00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 633 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3298 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.023 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.024 The recommended git tool is: git 00:00:00.024 using credential 00000000-0000-0000-0000-000000000002 00:00:00.026 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.037 Fetching changes from the remote Git repository 00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.059 Using shallow fetch with depth 1 00:00:00.059 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.059 > git --version # timeout=10 00:00:00.079 > git --version # 'git version 2.39.2' 00:00:00.079 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.108 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.108 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.283 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.292 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.302 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:03.302 > git config core.sparsecheckout # timeout=10 00:00:03.312 > git read-tree -mu HEAD # timeout=10 00:00:03.327 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:03.342 Commit message: "packer: Add bios builder" 00:00:03.343 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:03.450 [Pipeline] Start of Pipeline 00:00:03.465 [Pipeline] library 00:00:03.467 Loading library shm_lib@master 00:00:03.467 Library shm_lib@master is cached. Copying from home. 00:00:03.485 [Pipeline] node 00:00:03.504 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:03.506 [Pipeline] { 00:00:03.519 [Pipeline] catchError 00:00:03.521 [Pipeline] { 00:00:03.537 [Pipeline] wrap 00:00:03.547 [Pipeline] { 00:00:03.556 [Pipeline] stage 00:00:03.558 [Pipeline] { (Prologue) 00:00:03.742 [Pipeline] sh 00:00:04.022 + logger -p user.info -t JENKINS-CI 00:00:04.040 [Pipeline] echo 00:00:04.042 Node: WFP21 00:00:04.050 [Pipeline] sh 00:00:04.350 [Pipeline] setCustomBuildProperty 00:00:04.362 [Pipeline] echo 00:00:04.364 Cleanup processes 00:00:04.369 [Pipeline] sh 00:00:04.649 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:04.650 1384603 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:04.662 [Pipeline] sh 00:00:04.947 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:04.947 ++ grep -v 'sudo pgrep' 00:00:04.947 ++ awk '{print $1}' 00:00:04.947 + sudo kill -9 00:00:04.947 + true 00:00:04.962 [Pipeline] cleanWs 00:00:04.970 [WS-CLEANUP] Deleting project workspace... 00:00:04.970 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.976 [WS-CLEANUP] done 00:00:04.980 [Pipeline] setCustomBuildProperty 00:00:04.994 [Pipeline] sh 00:00:05.275 + sudo git config --global --replace-all safe.directory '*' 00:00:05.354 [Pipeline] httpRequest 00:00:05.381 [Pipeline] echo 00:00:05.382 Sorcerer 10.211.164.101 is alive 00:00:05.390 [Pipeline] httpRequest 00:00:05.394 HttpMethod: GET 00:00:05.395 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:05.395 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:05.397 Response Code: HTTP/1.1 200 OK 00:00:05.397 Success: Status code 200 is in the accepted range: 200,404 00:00:05.398 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.241 [Pipeline] sh 00:00:06.521 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.534 [Pipeline] httpRequest 00:00:06.560 [Pipeline] echo 00:00:06.561 Sorcerer 10.211.164.101 is alive 00:00:06.568 [Pipeline] httpRequest 00:00:06.572 HttpMethod: GET 00:00:06.572 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:06.573 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:06.589 Response Code: HTTP/1.1 200 OK 00:00:06.589 Success: Status code 200 is in the accepted range: 200,404 00:00:06.590 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:21.061 [Pipeline] sh 00:01:21.348 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:23.899 [Pipeline] sh 00:01:24.183 + git -C spdk log --oneline -n5 00:01:24.184 dbef7efac test: fix dpdk builds on ubuntu24 00:01:24.184 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:24.184 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:24.184 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:24.184 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:24.201 [Pipeline] withCredentials 00:01:24.212 > git --version # timeout=10 00:01:24.223 > git --version # 'git version 2.39.2' 00:01:24.240 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:24.242 [Pipeline] { 00:01:24.251 [Pipeline] retry 00:01:24.253 [Pipeline] { 00:01:24.268 [Pipeline] sh 00:01:24.553 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:25.136 [Pipeline] } 00:01:25.158 [Pipeline] // retry 00:01:25.163 [Pipeline] } 00:01:25.185 [Pipeline] // withCredentials 00:01:25.192 [Pipeline] httpRequest 00:01:25.204 [Pipeline] echo 00:01:25.205 Sorcerer 10.211.164.101 is alive 00:01:25.211 [Pipeline] httpRequest 00:01:25.215 HttpMethod: GET 00:01:25.216 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.217 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.218 Response Code: HTTP/1.1 200 OK 00:01:25.219 Success: Status code 200 is in the accepted range: 200,404 00:01:25.219 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.291 [Pipeline] sh 00:01:28.574 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:29.966 [Pipeline] sh 00:01:30.248 + git -C dpdk log --oneline -n5 00:01:30.248 caf0f5d395 version: 22.11.4 00:01:30.248 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW 
interrupt" 00:01:30.248 dc9c799c7d vhost: fix missing spinlock unlock 00:01:30.248 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:30.248 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:30.258 [Pipeline] } 00:01:30.271 [Pipeline] // stage 00:01:30.279 [Pipeline] stage 00:01:30.281 [Pipeline] { (Prepare) 00:01:30.298 [Pipeline] writeFile 00:01:30.314 [Pipeline] sh 00:01:30.596 + logger -p user.info -t JENKINS-CI 00:01:30.609 [Pipeline] sh 00:01:30.893 + logger -p user.info -t JENKINS-CI 00:01:30.905 [Pipeline] sh 00:01:31.187 + cat autorun-spdk.conf 00:01:31.187 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.187 SPDK_TEST_NVMF=1 00:01:31.187 SPDK_TEST_NVME_CLI=1 00:01:31.188 SPDK_TEST_NVMF_NICS=mlx5 00:01:31.188 SPDK_RUN_UBSAN=1 00:01:31.188 NET_TYPE=phy 00:01:31.188 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:31.188 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:31.195 RUN_NIGHTLY=1 00:01:31.200 [Pipeline] readFile 00:01:31.224 [Pipeline] withEnv 00:01:31.226 [Pipeline] { 00:01:31.239 [Pipeline] sh 00:01:31.523 + set -ex 00:01:31.523 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:31.523 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:31.523 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.523 ++ SPDK_TEST_NVMF=1 00:01:31.523 ++ SPDK_TEST_NVME_CLI=1 00:01:31.523 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:31.523 ++ SPDK_RUN_UBSAN=1 00:01:31.523 ++ NET_TYPE=phy 00:01:31.523 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:31.523 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:31.523 ++ RUN_NIGHTLY=1 00:01:31.523 + case $SPDK_TEST_NVMF_NICS in 00:01:31.523 + DRIVERS=mlx5_ib 00:01:31.523 + [[ -n mlx5_ib ]] 00:01:31.523 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:31.523 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:38.097 rmmod: ERROR: Module irdma is not currently loaded 00:01:38.097 rmmod: ERROR: Module i40iw is not currently loaded 00:01:38.097 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:38.097 + true 00:01:38.097 + for D in $DRIVERS 00:01:38.097 + sudo modprobe mlx5_ib 00:01:38.097 + exit 0 00:01:38.105 [Pipeline] } 00:01:38.122 [Pipeline] // withEnv 00:01:38.128 [Pipeline] } 00:01:38.144 [Pipeline] // stage 00:01:38.153 [Pipeline] catchError 00:01:38.155 [Pipeline] { 00:01:38.171 [Pipeline] timeout 00:01:38.171 Timeout set to expire in 1 hr 0 min 00:01:38.173 [Pipeline] { 00:01:38.189 [Pipeline] stage 00:01:38.191 [Pipeline] { (Tests) 00:01:38.206 [Pipeline] sh 00:01:38.488 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:38.489 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:38.489 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:38.489 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:38.489 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:38.489 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:38.489 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:38.489 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:38.489 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:38.489 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:38.489 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:38.489 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:38.489 + source /etc/os-release 00:01:38.489 ++ NAME='Fedora Linux' 00:01:38.489 ++ VERSION='38 (Cloud Edition)' 00:01:38.489 ++ ID=fedora 00:01:38.489 ++ VERSION_ID=38 00:01:38.489 ++ VERSION_CODENAME= 00:01:38.489 ++ PLATFORM_ID=platform:f38 00:01:38.489 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:38.489 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:38.489 ++ LOGO=fedora-logo-icon 00:01:38.489 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:38.489 ++ HOME_URL=https://fedoraproject.org/ 00:01:38.489 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:38.489 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:38.489 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:38.489 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:38.489 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:38.489 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:38.489 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:38.489 ++ SUPPORT_END=2024-05-14 00:01:38.489 ++ VARIANT='Cloud Edition' 00:01:38.489 ++ VARIANT_ID=cloud 00:01:38.489 + uname -a 00:01:38.489 Linux spdk-wfp-21 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:38.489 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:41.022 Hugepages 00:01:41.022 node hugesize free / total 00:01:41.022 node0 1048576kB 0 / 0 00:01:41.282 node0 2048kB 0 / 0 00:01:41.282 node1 1048576kB 0 / 0 00:01:41.282 node1 2048kB 0 / 0 00:01:41.282 00:01:41.282 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:41.282 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:41.282 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:41.282 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:41.282 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:41.282 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:41.282 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:41.282 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:41.282 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:41.282 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:41.282 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:41.282 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:41.282 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:41.282 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:41.282 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:41.282 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:41.282 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:41.282 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:41.282 + rm -f /tmp/spdk-ld-path 00:01:41.282 + source autorun-spdk.conf 00:01:41.282 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.282 ++ SPDK_TEST_NVMF=1 00:01:41.282 ++ SPDK_TEST_NVME_CLI=1 00:01:41.282 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:41.282 ++ SPDK_RUN_UBSAN=1 00:01:41.282 ++ NET_TYPE=phy 00:01:41.282 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.282 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:41.282 ++ RUN_NIGHTLY=1 00:01:41.282 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:41.282 + [[ -n '' ]] 00:01:41.282 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:41.282 + for M in /var/spdk/build-*-manifest.txt 
00:01:41.282 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:41.282 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:41.282 + for M in /var/spdk/build-*-manifest.txt 00:01:41.282 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:41.282 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:41.542 ++ uname 00:01:41.542 + [[ Linux == \L\i\n\u\x ]] 00:01:41.542 + sudo dmesg -T 00:01:41.542 + sudo dmesg --clear 00:01:41.542 + dmesg_pid=1385708 00:01:41.542 + [[ Fedora Linux == FreeBSD ]] 00:01:41.542 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.542 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.542 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:41.542 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:41.542 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:41.542 + [[ -x /usr/src/fio-static/fio ]] 00:01:41.542 + export FIO_BIN=/usr/src/fio-static/fio 00:01:41.542 + FIO_BIN=/usr/src/fio-static/fio 00:01:41.542 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:41.542 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:41.542 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:41.542 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.542 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.542 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:41.542 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.542 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.542 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:41.542 + sudo dmesg -Tw 00:01:41.542 Test configuration: 00:01:41.542 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.542 SPDK_TEST_NVMF=1 00:01:41.542 SPDK_TEST_NVME_CLI=1 00:01:41.542 SPDK_TEST_NVMF_NICS=mlx5 00:01:41.542 SPDK_RUN_UBSAN=1 00:01:41.542 NET_TYPE=phy 00:01:41.542 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.542 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:41.542 RUN_NIGHTLY=1 21:05:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:41.542 21:05:16 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:41.542 21:05:16 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:41.542 21:05:16 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:41.542 21:05:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.542 21:05:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.542 21:05:16 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.542 21:05:16 -- paths/export.sh@5 -- $ export PATH 00:01:41.542 21:05:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.542 21:05:16 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:41.542 21:05:16 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:41.542 21:05:16 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1722020716.XXXXXX 00:01:41.542 21:05:16 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1722020716.YIfhVm 00:01:41.542 21:05:16 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:41.542 21:05:16 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:01:41.542 21:05:16 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:41.542 21:05:16 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:41.542 21:05:16 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:41.542 21:05:16 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:41.542 21:05:16 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:41.542 21:05:16 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:41.542 21:05:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.542 21:05:16 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:41.542 21:05:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:41.542 21:05:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:41.542 21:05:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:41.542 21:05:16 -- spdk/autobuild.sh@16 -- $ date -u 00:01:41.542 Fri Jul 26 07:05:16 PM UTC 2024 00:01:41.542 21:05:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:41.542 LTS-60-gdbef7efac 00:01:41.542 21:05:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:41.542 21:05:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:41.542 21:05:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:41.542 21:05:16 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:41.542 21:05:16 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:41.542 21:05:16 -- common/autotest_common.sh@10 -- $ 
set +x 00:01:41.542 ************************************ 00:01:41.542 START TEST ubsan 00:01:41.542 ************************************ 00:01:41.542 21:05:16 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:41.542 using ubsan 00:01:41.542 00:01:41.542 real 0m0.000s 00:01:41.542 user 0m0.000s 00:01:41.542 sys 0m0.000s 00:01:41.542 21:05:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:41.542 21:05:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.542 ************************************ 00:01:41.542 END TEST ubsan 00:01:41.542 ************************************ 00:01:41.803 21:05:16 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:41.803 21:05:16 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:41.803 21:05:16 -- common/autobuild_common.sh@430 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:41.803 21:05:16 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:41.803 21:05:16 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:41.803 21:05:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.803 ************************************ 00:01:41.803 START TEST build_native_dpdk 00:01:41.803 ************************************ 00:01:41.803 21:05:16 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:41.803 21:05:16 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:41.803 21:05:16 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:41.803 21:05:16 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:41.803 21:05:16 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:41.803 21:05:16 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:41.803 21:05:16 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:41.803 21:05:16 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:41.803 21:05:16 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:41.803 21:05:16 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:41.803 21:05:16 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:41.803 21:05:16 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:41.803 21:05:16 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:41.803 21:05:16 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:41.803 21:05:16 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:41.803 21:05:16 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:41.803 21:05:16 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:41.803 21:05:16 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:41.803 caf0f5d395 version: 22.11.4 00:01:41.803 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:41.803 dc9c799c7d vhost: fix missing spinlock unlock 00:01:41.803 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:41.803 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:41.803 21:05:16 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:41.803 21:05:16 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:41.803 21:05:16 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:41.803 21:05:16 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:41.803 21:05:16 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:41.803 21:05:16 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:41.803 21:05:16 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:41.803 21:05:16 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:41.803 21:05:16 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:41.803 21:05:16 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:41.803 21:05:16 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:41.803 21:05:16 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:41.803 21:05:16 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:41.803 21:05:16 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:41.803 21:05:16 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:41.803 21:05:16 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:41.803 21:05:16 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:41.803 21:05:16 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:41.803 21:05:16 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:41.803 21:05:16 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:41.803 21:05:16 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:41.803 21:05:16 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:41.803 21:05:16 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:41.803 21:05:16 -- scripts/common.sh@343 -- $ case "$op" in 00:01:41.803 21:05:16 -- scripts/common.sh@344 -- $ : 1 00:01:41.803 21:05:16 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:41.803 21:05:16 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:41.803 21:05:16 -- scripts/common.sh@364 -- $ decimal 22 00:01:41.803 21:05:16 -- scripts/common.sh@352 -- $ local d=22 00:01:41.803 21:05:16 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:41.803 21:05:16 -- scripts/common.sh@354 -- $ echo 22 00:01:41.803 21:05:16 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:41.803 21:05:16 -- scripts/common.sh@365 -- $ decimal 21 00:01:41.803 21:05:16 -- scripts/common.sh@352 -- $ local d=21 00:01:41.803 21:05:16 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:41.803 21:05:16 -- scripts/common.sh@354 -- $ echo 21 00:01:41.803 21:05:16 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:41.803 21:05:16 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:41.803 21:05:16 -- scripts/common.sh@366 -- $ return 1 00:01:41.803 21:05:16 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:41.803 patching file config/rte_config.h 00:01:41.803 Hunk #1 succeeded at 60 (offset 1 line). 00:01:41.803 21:05:16 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:41.803 21:05:16 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:41.803 21:05:16 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:41.803 21:05:16 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:41.803 21:05:16 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:41.803 21:05:16 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:41.803 21:05:16 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:41.803 21:05:16 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:41.803 21:05:16 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:41.803 21:05:16 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:41.803 21:05:16 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:41.803 21:05:16 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:41.803 21:05:16 -- scripts/common.sh@343 -- $ case "$op" in 00:01:41.803 21:05:16 -- scripts/common.sh@344 -- $ : 1 00:01:41.803 21:05:16 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:41.803 21:05:16 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:41.803 21:05:16 -- scripts/common.sh@364 -- $ decimal 22 00:01:41.803 21:05:16 -- scripts/common.sh@352 -- $ local d=22 00:01:41.803 21:05:16 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:41.803 21:05:16 -- scripts/common.sh@354 -- $ echo 22 00:01:41.803 21:05:16 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:41.803 21:05:16 -- scripts/common.sh@365 -- $ decimal 24 00:01:41.803 21:05:16 -- scripts/common.sh@352 -- $ local d=24 00:01:41.803 21:05:16 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:41.803 21:05:16 -- scripts/common.sh@354 -- $ echo 24 00:01:41.803 21:05:16 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:41.803 21:05:16 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:41.803 21:05:16 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:41.803 21:05:16 -- scripts/common.sh@367 -- $ return 0 00:01:41.803 21:05:16 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:41.803 patching file lib/pcapng/rte_pcapng.c 00:01:41.803 Hunk #1 succeeded at 110 (offset -18 lines). 
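The xtrace above walks the version comparison in scripts/common.sh twice: 22.11.4 against 21.11.0 (returns 1, after which the config/rte_config.h patch is applied) and 22.11.4 against 24.07.0 (returns 0, after which the lib/pcapng/rte_pcapng.c patch is applied). Below is a minimal sketch of that component-wise compare, assuming purely numeric fields; ver_lt is an illustrative name, not the actual helper traced from scripts/common.sh.

    # Sketch only: split both versions on ".-:" and compare field by field,
    # mirroring the cmp_versions trace above. Assumes numeric fields.
    ver_lt() {
        local -a a b
        local i x y n
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0}; y=${b[i]:-0}
            if   (( x < y )); then return 0   # first differing field decides
            elif (( x > y )); then return 1
            fi
        done
        return 1                              # equal is not "less than"
    }

    ver_lt 22.11.4 21.11.0; echo $?   # 1, matching the "return 1" traced above
    ver_lt 22.11.4 24.07.0; echo $?   # 0, matching the "return 0" traced above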
00:01:41.803 21:05:16 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:41.803 21:05:16 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:41.803 21:05:16 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:41.803 21:05:16 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:41.803 21:05:16 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:46.029 The Meson build system 00:01:46.029 Version: 1.3.1 00:01:46.029 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:46.029 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:01:46.029 Build type: native build 00:01:46.029 Program cat found: YES (/usr/bin/cat) 00:01:46.029 Project name: DPDK 00:01:46.029 Project version: 22.11.4 00:01:46.029 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:46.029 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:46.029 Host machine cpu family: x86_64 00:01:46.029 Host machine cpu: x86_64 00:01:46.029 Message: ## Building in Developer Mode ## 00:01:46.030 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:46.030 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:46.030 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:46.030 Program objdump found: YES (/usr/bin/objdump) 00:01:46.030 Program python3 found: YES (/usr/bin/python3) 00:01:46.030 Program cat found: YES (/usr/bin/cat) 00:01:46.030 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:46.030 Checking for size of "void *" : 8 00:01:46.030 Checking for size of "void *" : 8 (cached) 00:01:46.030 Library m found: YES 00:01:46.030 Library numa found: YES 00:01:46.030 Has header "numaif.h" : YES 00:01:46.030 Library fdt found: NO 00:01:46.030 Library execinfo found: NO 00:01:46.030 Has header "execinfo.h" : YES 00:01:46.030 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:46.030 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:46.030 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:46.030 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:46.030 Run-time dependency openssl found: YES 3.0.9 00:01:46.030 Run-time dependency libpcap found: YES 1.10.4 00:01:46.030 Has header "pcap.h" with dependency libpcap: YES 00:01:46.030 Compiler for C supports arguments -Wcast-qual: YES 00:01:46.030 Compiler for C supports arguments -Wdeprecated: YES 00:01:46.030 Compiler for C supports arguments -Wformat: YES 00:01:46.030 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:46.030 Compiler for C supports arguments -Wformat-security: NO 00:01:46.030 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.030 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:46.030 Compiler for C supports arguments -Wnested-externs: YES 00:01:46.030 Compiler for C supports arguments -Wold-style-definition: YES 00:01:46.030 Compiler for C supports arguments -Wpointer-arith: YES 00:01:46.030 Compiler for C supports arguments -Wsign-compare: YES 00:01:46.030 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:46.030 Compiler for C supports arguments -Wundef: YES 00:01:46.030 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.030 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:46.030 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:46.030 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.030 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:46.030 Compiler for C supports arguments -mavx512f: YES 00:01:46.030 Checking if "AVX512 checking" compiles: YES 00:01:46.030 Fetching value of define "__SSE4_2__" : 1 00:01:46.030 Fetching value of define "__AES__" : 1 00:01:46.030 Fetching value of define "__AVX__" : 1 00:01:46.030 Fetching value of define "__AVX2__" : 1 00:01:46.030 Fetching value of define "__AVX512BW__" : 1 00:01:46.030 Fetching value of define "__AVX512CD__" : 1 00:01:46.030 Fetching value of define "__AVX512DQ__" : 1 00:01:46.030 Fetching value of define "__AVX512F__" : 1 00:01:46.030 Fetching value of define "__AVX512VL__" : 1 00:01:46.030 Fetching value of define "__PCLMUL__" : 1 00:01:46.030 Fetching value of define "__RDRND__" : 1 00:01:46.030 Fetching value of define "__RDSEED__" : 1 00:01:46.030 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:46.030 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:46.030 Message: lib/kvargs: Defining dependency "kvargs" 00:01:46.030 Message: lib/telemetry: Defining dependency "telemetry" 00:01:46.030 Checking for function "getentropy" : YES 00:01:46.030 Message: lib/eal: Defining dependency "eal" 00:01:46.030 Message: lib/ring: Defining dependency "ring" 00:01:46.030 Message: lib/rcu: Defining dependency "rcu" 00:01:46.030 Message: lib/mempool: Defining dependency "mempool" 00:01:46.030 Message: lib/mbuf: Defining dependency "mbuf" 00:01:46.030 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:46.030 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:46.030 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:46.030 Compiler for C supports arguments -mpclmul: YES 00:01:46.030 Compiler for C supports arguments -maes: YES 00:01:46.030 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.030 Compiler for C supports arguments -mavx512bw: YES 00:01:46.030 Compiler for C supports arguments -mavx512dq: YES 00:01:46.030 Compiler for C supports arguments -mavx512vl: YES 00:01:46.030 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:46.030 Compiler for C supports arguments -mavx2: YES 00:01:46.030 Compiler for C supports arguments -mavx: YES 00:01:46.030 Message: lib/net: Defining dependency "net" 00:01:46.030 Message: lib/meter: Defining dependency "meter" 00:01:46.030 Message: lib/ethdev: Defining dependency "ethdev" 00:01:46.030 Message: lib/pci: Defining dependency "pci" 00:01:46.030 Message: lib/cmdline: Defining dependency "cmdline" 00:01:46.030 Message: lib/metrics: Defining dependency "metrics" 00:01:46.030 Message: lib/hash: Defining dependency "hash" 00:01:46.030 Message: lib/timer: Defining dependency "timer" 00:01:46.030 Fetching value of define "__AVX2__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.030 Message: lib/acl: Defining dependency "acl" 00:01:46.030 Message: lib/bbdev: Defining dependency "bbdev" 00:01:46.030 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:46.030 Run-time dependency libelf found: YES 0.190 00:01:46.030 Message: lib/bpf: Defining dependency "bpf" 00:01:46.030 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:46.030 Message: lib/compressdev: Defining dependency "compressdev" 00:01:46.030 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:46.030 Message: lib/distributor: Defining dependency "distributor" 00:01:46.030 Message: lib/efd: Defining dependency "efd" 00:01:46.030 Message: lib/eventdev: Defining dependency "eventdev" 00:01:46.030 Message: lib/gpudev: Defining dependency "gpudev" 00:01:46.030 Message: lib/gro: Defining dependency "gro" 00:01:46.030 Message: lib/gso: Defining dependency "gso" 00:01:46.030 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:46.030 Message: lib/jobstats: Defining dependency "jobstats" 00:01:46.030 Message: lib/latencystats: Defining dependency "latencystats" 00:01:46.030 Message: lib/lpm: Defining dependency "lpm" 00:01:46.030 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:46.030 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:46.030 Message: lib/member: Defining dependency "member" 00:01:46.030 Message: lib/pcapng: Defining dependency "pcapng" 00:01:46.030 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:46.030 Message: lib/power: Defining dependency "power" 00:01:46.030 Message: lib/rawdev: Defining dependency "rawdev" 00:01:46.030 Message: lib/regexdev: Defining dependency "regexdev" 00:01:46.030 Message: lib/dmadev: 
Defining dependency "dmadev" 00:01:46.030 Message: lib/rib: Defining dependency "rib" 00:01:46.030 Message: lib/reorder: Defining dependency "reorder" 00:01:46.030 Message: lib/sched: Defining dependency "sched" 00:01:46.030 Message: lib/security: Defining dependency "security" 00:01:46.030 Message: lib/stack: Defining dependency "stack" 00:01:46.030 Has header "linux/userfaultfd.h" : YES 00:01:46.030 Message: lib/vhost: Defining dependency "vhost" 00:01:46.030 Message: lib/ipsec: Defining dependency "ipsec" 00:01:46.030 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:46.030 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.030 Message: lib/fib: Defining dependency "fib" 00:01:46.030 Message: lib/port: Defining dependency "port" 00:01:46.030 Message: lib/pdump: Defining dependency "pdump" 00:01:46.030 Message: lib/table: Defining dependency "table" 00:01:46.030 Message: lib/pipeline: Defining dependency "pipeline" 00:01:46.030 Message: lib/graph: Defining dependency "graph" 00:01:46.030 Message: lib/node: Defining dependency "node" 00:01:46.030 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:46.030 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:46.030 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:46.030 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.030 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:46.030 Compiler for C supports arguments -Wno-unused-value: YES 00:01:46.030 Compiler for C supports arguments -Wno-format: YES 00:01:46.030 Compiler for C supports arguments -Wno-format-security: YES 00:01:46.030 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:46.975 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:46.975 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:46.975 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:46.975 Fetching value of define "__AVX2__" : 1 (cached) 00:01:46.975 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.975 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.975 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.975 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:46.975 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:46.975 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:46.975 Program doxygen found: YES (/usr/bin/doxygen) 00:01:46.975 Configuring doxy-api.conf using configuration 00:01:46.975 Program sphinx-build found: NO 00:01:46.975 Configuring rte_build_config.h using configuration 00:01:46.975 Message: 00:01:46.975 ================= 00:01:46.975 Applications Enabled 00:01:46.975 ================= 00:01:46.975 00:01:46.975 apps: 00:01:46.975 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:46.975 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:46.975 test-security-perf, 00:01:46.975 00:01:46.975 Message: 00:01:46.975 ================= 00:01:46.975 Libraries Enabled 00:01:46.975 ================= 00:01:46.975 00:01:46.975 libs: 00:01:46.975 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:46.975 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:46.975 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:46.975 eventdev, gpudev, 
gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:46.975 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:46.975 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:46.975 table, pipeline, graph, node, 00:01:46.975 00:01:46.975 Message: 00:01:46.975 =============== 00:01:46.975 Drivers Enabled 00:01:46.975 =============== 00:01:46.975 00:01:46.975 common: 00:01:46.975 00:01:46.975 bus: 00:01:46.975 pci, vdev, 00:01:46.975 mempool: 00:01:46.975 ring, 00:01:46.975 dma: 00:01:46.975 00:01:46.975 net: 00:01:46.975 i40e, 00:01:46.975 raw: 00:01:46.975 00:01:46.975 crypto: 00:01:46.975 00:01:46.975 compress: 00:01:46.975 00:01:46.975 regex: 00:01:46.975 00:01:46.975 vdpa: 00:01:46.975 00:01:46.975 event: 00:01:46.975 00:01:46.975 baseband: 00:01:46.975 00:01:46.975 gpu: 00:01:46.975 00:01:46.975 00:01:46.975 Message: 00:01:46.975 ================= 00:01:46.975 Content Skipped 00:01:46.975 ================= 00:01:46.975 00:01:46.975 apps: 00:01:46.975 00:01:46.975 libs: 00:01:46.975 kni: explicitly disabled via build config (deprecated lib) 00:01:46.975 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:46.975 00:01:46.975 drivers: 00:01:46.975 common/cpt: not in enabled drivers build config 00:01:46.975 common/dpaax: not in enabled drivers build config 00:01:46.975 common/iavf: not in enabled drivers build config 00:01:46.975 common/idpf: not in enabled drivers build config 00:01:46.975 common/mvep: not in enabled drivers build config 00:01:46.975 common/octeontx: not in enabled drivers build config 00:01:46.975 bus/auxiliary: not in enabled drivers build config 00:01:46.975 bus/dpaa: not in enabled drivers build config 00:01:46.975 bus/fslmc: not in enabled drivers build config 00:01:46.975 bus/ifpga: not in enabled drivers build config 00:01:46.975 bus/vmbus: not in enabled drivers build config 00:01:46.975 common/cnxk: not in enabled drivers build config 00:01:46.975 common/mlx5: not in enabled drivers build config 00:01:46.975 common/qat: not in enabled drivers build config 00:01:46.975 common/sfc_efx: not in enabled drivers build config 00:01:46.975 mempool/bucket: not in enabled drivers build config 00:01:46.975 mempool/cnxk: not in enabled drivers build config 00:01:46.975 mempool/dpaa: not in enabled drivers build config 00:01:46.975 mempool/dpaa2: not in enabled drivers build config 00:01:46.975 mempool/octeontx: not in enabled drivers build config 00:01:46.975 mempool/stack: not in enabled drivers build config 00:01:46.975 dma/cnxk: not in enabled drivers build config 00:01:46.975 dma/dpaa: not in enabled drivers build config 00:01:46.975 dma/dpaa2: not in enabled drivers build config 00:01:46.975 dma/hisilicon: not in enabled drivers build config 00:01:46.975 dma/idxd: not in enabled drivers build config 00:01:46.975 dma/ioat: not in enabled drivers build config 00:01:46.975 dma/skeleton: not in enabled drivers build config 00:01:46.975 net/af_packet: not in enabled drivers build config 00:01:46.975 net/af_xdp: not in enabled drivers build config 00:01:46.975 net/ark: not in enabled drivers build config 00:01:46.975 net/atlantic: not in enabled drivers build config 00:01:46.975 net/avp: not in enabled drivers build config 00:01:46.975 net/axgbe: not in enabled drivers build config 00:01:46.975 net/bnx2x: not in enabled drivers build config 00:01:46.975 net/bnxt: not in enabled drivers build config 00:01:46.975 net/bonding: not in enabled drivers build config 00:01:46.975 net/cnxk: not in enabled drivers build config 
00:01:46.975 net/cxgbe: not in enabled drivers build config 00:01:46.975 net/dpaa: not in enabled drivers build config 00:01:46.975 net/dpaa2: not in enabled drivers build config 00:01:46.975 net/e1000: not in enabled drivers build config 00:01:46.975 net/ena: not in enabled drivers build config 00:01:46.975 net/enetc: not in enabled drivers build config 00:01:46.975 net/enetfec: not in enabled drivers build config 00:01:46.975 net/enic: not in enabled drivers build config 00:01:46.975 net/failsafe: not in enabled drivers build config 00:01:46.975 net/fm10k: not in enabled drivers build config 00:01:46.975 net/gve: not in enabled drivers build config 00:01:46.975 net/hinic: not in enabled drivers build config 00:01:46.975 net/hns3: not in enabled drivers build config 00:01:46.975 net/iavf: not in enabled drivers build config 00:01:46.975 net/ice: not in enabled drivers build config 00:01:46.975 net/idpf: not in enabled drivers build config 00:01:46.975 net/igc: not in enabled drivers build config 00:01:46.975 net/ionic: not in enabled drivers build config 00:01:46.975 net/ipn3ke: not in enabled drivers build config 00:01:46.975 net/ixgbe: not in enabled drivers build config 00:01:46.975 net/kni: not in enabled drivers build config 00:01:46.975 net/liquidio: not in enabled drivers build config 00:01:46.975 net/mana: not in enabled drivers build config 00:01:46.975 net/memif: not in enabled drivers build config 00:01:46.975 net/mlx4: not in enabled drivers build config 00:01:46.975 net/mlx5: not in enabled drivers build config 00:01:46.975 net/mvneta: not in enabled drivers build config 00:01:46.975 net/mvpp2: not in enabled drivers build config 00:01:46.975 net/netvsc: not in enabled drivers build config 00:01:46.975 net/nfb: not in enabled drivers build config 00:01:46.975 net/nfp: not in enabled drivers build config 00:01:46.975 net/ngbe: not in enabled drivers build config 00:01:46.975 net/null: not in enabled drivers build config 00:01:46.975 net/octeontx: not in enabled drivers build config 00:01:46.975 net/octeon_ep: not in enabled drivers build config 00:01:46.975 net/pcap: not in enabled drivers build config 00:01:46.975 net/pfe: not in enabled drivers build config 00:01:46.975 net/qede: not in enabled drivers build config 00:01:46.975 net/ring: not in enabled drivers build config 00:01:46.975 net/sfc: not in enabled drivers build config 00:01:46.975 net/softnic: not in enabled drivers build config 00:01:46.975 net/tap: not in enabled drivers build config 00:01:46.975 net/thunderx: not in enabled drivers build config 00:01:46.975 net/txgbe: not in enabled drivers build config 00:01:46.975 net/vdev_netvsc: not in enabled drivers build config 00:01:46.975 net/vhost: not in enabled drivers build config 00:01:46.975 net/virtio: not in enabled drivers build config 00:01:46.975 net/vmxnet3: not in enabled drivers build config 00:01:46.975 raw/cnxk_bphy: not in enabled drivers build config 00:01:46.975 raw/cnxk_gpio: not in enabled drivers build config 00:01:46.976 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:46.976 raw/ifpga: not in enabled drivers build config 00:01:46.976 raw/ntb: not in enabled drivers build config 00:01:46.976 raw/skeleton: not in enabled drivers build config 00:01:46.976 crypto/armv8: not in enabled drivers build config 00:01:46.976 crypto/bcmfs: not in enabled drivers build config 00:01:46.976 crypto/caam_jr: not in enabled drivers build config 00:01:46.976 crypto/ccp: not in enabled drivers build config 00:01:46.976 crypto/cnxk: not in enabled drivers 
build config 00:01:46.976 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.976 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.976 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.976 crypto/mlx5: not in enabled drivers build config 00:01:46.976 crypto/mvsam: not in enabled drivers build config 00:01:46.976 crypto/nitrox: not in enabled drivers build config 00:01:46.976 crypto/null: not in enabled drivers build config 00:01:46.976 crypto/octeontx: not in enabled drivers build config 00:01:46.976 crypto/openssl: not in enabled drivers build config 00:01:46.976 crypto/scheduler: not in enabled drivers build config 00:01:46.976 crypto/uadk: not in enabled drivers build config 00:01:46.976 crypto/virtio: not in enabled drivers build config 00:01:46.976 compress/isal: not in enabled drivers build config 00:01:46.976 compress/mlx5: not in enabled drivers build config 00:01:46.976 compress/octeontx: not in enabled drivers build config 00:01:46.976 compress/zlib: not in enabled drivers build config 00:01:46.976 regex/mlx5: not in enabled drivers build config 00:01:46.976 regex/cn9k: not in enabled drivers build config 00:01:46.976 vdpa/ifc: not in enabled drivers build config 00:01:46.976 vdpa/mlx5: not in enabled drivers build config 00:01:46.976 vdpa/sfc: not in enabled drivers build config 00:01:46.976 event/cnxk: not in enabled drivers build config 00:01:46.976 event/dlb2: not in enabled drivers build config 00:01:46.976 event/dpaa: not in enabled drivers build config 00:01:46.976 event/dpaa2: not in enabled drivers build config 00:01:46.976 event/dsw: not in enabled drivers build config 00:01:46.976 event/opdl: not in enabled drivers build config 00:01:46.976 event/skeleton: not in enabled drivers build config 00:01:46.976 event/sw: not in enabled drivers build config 00:01:46.976 event/octeontx: not in enabled drivers build config 00:01:46.976 baseband/acc: not in enabled drivers build config 00:01:46.976 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:46.976 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:46.976 baseband/la12xx: not in enabled drivers build config 00:01:46.976 baseband/null: not in enabled drivers build config 00:01:46.976 baseband/turbo_sw: not in enabled drivers build config 00:01:46.976 gpu/cuda: not in enabled drivers build config 00:01:46.976 00:01:46.976 00:01:46.976 Build targets in project: 311 00:01:46.976 00:01:46.976 DPDK 22.11.4 00:01:46.976 00:01:46.976 User defined options 00:01:46.976 libdir : lib 00:01:46.976 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:46.976 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:46.976 c_link_args : 00:01:46.976 enable_docs : false 00:01:46.976 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:46.976 enable_kmods : false 00:01:46.976 machine : native 00:01:46.976 tests : false 00:01:46.976 00:01:46.976 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.976 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:46.976 21:05:21 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:01:46.976 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:46.976 [1/740] Generating lib/rte_kvargs_def with a custom command 00:01:46.976 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:46.976 [3/740] Generating lib/rte_telemetry_def with a custom command 00:01:46.976 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:46.976 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.976 [6/740] Generating lib/rte_ring_mingw with a custom command 00:01:46.976 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.976 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.976 [9/740] Generating lib/rte_eal_def with a custom command 00:01:46.976 [10/740] Generating lib/rte_eal_mingw with a custom command 00:01:46.976 [11/740] Generating lib/rte_ring_def with a custom command 00:01:46.976 [12/740] Generating lib/rte_rcu_def with a custom command 00:01:46.976 [13/740] Generating lib/rte_rcu_mingw with a custom command 00:01:46.976 [14/740] Generating lib/rte_mempool_def with a custom command 00:01:46.976 [15/740] Generating lib/rte_mempool_mingw with a custom command 00:01:46.976 [16/740] Generating lib/rte_mbuf_def with a custom command 00:01:46.976 [17/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:46.976 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:46.976 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.976 [20/740] Generating lib/rte_net_mingw with a custom command 00:01:46.976 [21/740] Generating lib/rte_net_def with a custom command 00:01:46.976 [22/740] Generating lib/rte_meter_def with a custom command 00:01:46.976 [23/740] Generating lib/rte_meter_mingw with a custom command 00:01:46.976 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.242 [25/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.242 [26/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.242 [27/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:47.242 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:47.242 [29/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.242 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:47.242 [31/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:47.242 [32/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:47.242 [33/740] Generating lib/rte_ethdev_def with a custom command 00:01:47.242 [34/740] Generating lib/rte_pci_mingw with a custom command 00:01:47.242 [35/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:47.242 [36/740] Generating lib/rte_pci_def with a custom command 00:01:47.242 [37/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:47.242 [38/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:47.242 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:47.242 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:47.242 [41/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:47.242 [42/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.242 [43/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.242 [44/740] Linking static target lib/librte_kvargs.a 00:01:47.242 [45/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:47.242 [46/740] Generating lib/rte_cmdline_def with a custom command 00:01:47.242 [47/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:47.242 [48/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.242 [49/740] Generating lib/rte_metrics_def with a custom command 00:01:47.242 [50/740] Generating lib/rte_metrics_mingw with a custom command 00:01:47.242 [51/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.242 [52/740] Generating lib/rte_hash_mingw with a custom command 00:01:47.242 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.242 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:47.242 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:47.242 [56/740] Generating lib/rte_hash_def with a custom command 00:01:47.242 [57/740] Generating lib/rte_timer_def with a custom command 00:01:47.242 [58/740] Generating lib/rte_timer_mingw with a custom command 00:01:47.242 [59/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.242 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.242 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.242 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.242 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.242 [64/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:47.242 [65/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.242 [66/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:47.242 [67/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:47.242 [68/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.242 [69/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.242 [70/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.242 [71/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.242 [72/740] Generating lib/rte_acl_mingw with a custom command 00:01:47.242 [73/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:47.242 [74/740] Generating lib/rte_acl_def with a custom command 00:01:47.242 [75/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:47.242 [76/740] Generating lib/rte_bbdev_def with a custom command 00:01:47.242 [77/740] Generating lib/rte_bitratestats_def with a custom command 00:01:47.242 [78/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:47.242 [79/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:47.242 [80/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:47.242 [81/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.242 [82/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:47.242 [83/740] Generating lib/rte_bpf_mingw with a custom command 
00:01:47.242 [84/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.242 [85/740] Generating lib/rte_bpf_def with a custom command 00:01:47.242 [86/740] Linking static target lib/librte_pci.a 00:01:47.242 [87/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.242 [88/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:47.242 [89/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:47.242 [90/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:47.242 [91/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.242 [92/740] Generating lib/rte_cfgfile_def with a custom command 00:01:47.242 [93/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:47.242 [94/740] Linking static target lib/librte_meter.a 00:01:47.242 [95/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.242 [96/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:47.242 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.242 [98/740] Generating lib/rte_compressdev_def with a custom command 00:01:47.242 [99/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:47.242 [100/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:47.242 [101/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:47.505 [102/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:47.505 [103/740] Linking static target lib/librte_ring.a 00:01:47.505 [104/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.505 [105/740] Generating lib/rte_cryptodev_def with a custom command 00:01:47.505 [106/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:47.505 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.505 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:47.505 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.505 [110/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:47.505 [111/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.505 [112/740] Generating lib/rte_efd_def with a custom command 00:01:47.505 [113/740] Generating lib/rte_distributor_mingw with a custom command 00:01:47.505 [114/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:47.505 [115/740] Generating lib/rte_distributor_def with a custom command 00:01:47.505 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:47.505 [117/740] Generating lib/rte_efd_mingw with a custom command 00:01:47.505 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.505 [119/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.505 [120/740] Generating lib/rte_eventdev_def with a custom command 00:01:47.505 [121/740] Generating lib/rte_gpudev_def with a custom command 00:01:47.505 [122/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:47.505 [123/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.505 [124/740] Generating lib/rte_gpudev_mingw with a custom command 00:01:47.505 [125/740] Generating lib/rte_gro_mingw with a custom command 00:01:47.505 [126/740] Generating 
lib/rte_gro_def with a custom command 00:01:47.505 [127/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:47.505 [128/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.505 [129/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:47.505 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:47.505 [131/740] Generating lib/rte_gso_def with a custom command 00:01:47.505 [132/740] Generating lib/rte_gso_mingw with a custom command 00:01:47.505 [133/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:47.505 [134/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.505 [135/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:47.765 [136/740] Generating lib/rte_ip_frag_def with a custom command 00:01:47.765 [137/740] Generating lib/rte_ip_frag_mingw with a custom command 00:01:47.765 [138/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.765 [139/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.765 [140/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:47.765 [141/740] Generating lib/rte_jobstats_def with a custom command 00:01:47.765 [142/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.765 [143/740] Generating lib/rte_jobstats_mingw with a custom command 00:01:47.765 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:47.765 [145/740] Linking target lib/librte_kvargs.so.23.0 00:01:47.765 [146/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:47.765 [147/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.765 [148/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.765 [149/740] Generating lib/rte_latencystats_def with a custom command 00:01:47.765 [150/740] Generating lib/rte_latencystats_mingw with a custom command 00:01:47.765 [151/740] Linking static target lib/librte_cfgfile.a 00:01:47.765 [152/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.765 [153/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:47.765 [154/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.765 [155/740] Generating lib/rte_lpm_def with a custom command 00:01:47.765 [156/740] Generating lib/rte_lpm_mingw with a custom command 00:01:47.765 [157/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.765 [158/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.765 [159/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:47.765 [160/740] Generating lib/rte_member_def with a custom command 00:01:47.765 [161/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:47.765 [162/740] Generating lib/rte_pcapng_def with a custom command 00:01:47.765 [163/740] Generating lib/rte_member_mingw with a custom command 00:01:47.765 [164/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.765 [165/740] Generating lib/rte_pcapng_mingw with a custom command 00:01:47.765 [166/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:47.765 [167/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.765 [168/740] 
Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:47.765 [169/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.765 [170/740] Linking static target lib/librte_jobstats.a 00:01:47.765 [171/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:47.765 [172/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.765 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:47.765 [174/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.765 [175/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:47.765 [176/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.029 [177/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:48.029 [178/740] Linking static target lib/librte_timer.a 00:01:48.029 [179/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.029 [180/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:48.029 [181/740] Linking static target lib/librte_telemetry.a 00:01:48.029 [182/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:48.029 [183/740] Generating lib/rte_power_def with a custom command 00:01:48.029 [184/740] Generating lib/rte_power_mingw with a custom command 00:01:48.029 [185/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:48.029 [186/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.029 [187/740] Linking static target lib/librte_cmdline.a 00:01:48.029 [188/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:48.029 [189/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.029 [190/740] Generating lib/rte_rawdev_def with a custom command 00:01:48.029 [191/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:48.029 [192/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:48.029 [193/740] Generating lib/rte_rawdev_mingw with a custom command 00:01:48.029 [194/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:48.029 [195/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:48.029 [196/740] Linking static target lib/librte_metrics.a 00:01:48.029 [197/740] Generating lib/rte_regexdev_mingw with a custom command 00:01:48.029 [198/740] Generating lib/rte_regexdev_def with a custom command 00:01:48.029 [199/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.029 [200/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.030 [201/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:48.030 [202/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.030 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:48.030 [204/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.030 [205/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.030 [206/740] Generating lib/rte_dmadev_def with a custom command 00:01:48.030 [207/740] Generating lib/rte_dmadev_mingw with a custom command 00:01:48.030 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:48.030 [209/740] Generating lib/rte_rib_def with a custom command 00:01:48.030 [210/740] Generating lib/rte_rib_mingw with a custom command 00:01:48.030 [211/740] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:48.030 [212/740] Generating lib/rte_reorder_def with a custom command 00:01:48.030 [213/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:48.030 [214/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:48.030 [215/740] Generating lib/rte_reorder_mingw with a custom command 00:01:48.030 [216/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.030 [217/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.030 [218/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:48.030 [219/740] Generating lib/rte_sched_mingw with a custom command 00:01:48.030 [220/740] Generating lib/rte_sched_def with a custom command 00:01:48.030 [221/740] Linking static target lib/librte_net.a 00:01:48.030 [222/740] Linking static target lib/librte_bitratestats.a 00:01:48.030 [223/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:48.030 [224/740] Generating lib/rte_security_def with a custom command 00:01:48.030 [225/740] Generating lib/rte_security_mingw with a custom command 00:01:48.030 [226/740] Generating lib/rte_stack_def with a custom command 00:01:48.030 [227/740] Generating lib/rte_stack_mingw with a custom command 00:01:48.030 [228/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:48.030 [229/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:48.030 [230/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.030 [231/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:48.030 [232/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:48.030 [233/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.030 [234/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:48.030 [235/740] Generating lib/rte_vhost_mingw with a custom command 00:01:48.030 [236/740] Generating lib/rte_vhost_def with a custom command 00:01:48.030 [237/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:48.030 [238/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:48.030 [239/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:48.030 [240/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:48.030 [241/740] Generating lib/rte_ipsec_def with a custom command 00:01:48.030 [242/740] Generating lib/rte_ipsec_mingw with a custom command 00:01:48.030 [243/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:48.030 [244/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:48.030 [245/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:48.030 [246/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:48.293 [247/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:48.293 [248/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:48.293 [249/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:48.293 [250/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:48.293 [251/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:48.293 [252/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:48.293 [253/740] Generating lib/rte_fib_mingw with a custom command 00:01:48.293 
[254/740] Linking static target lib/librte_stack.a 00:01:48.293 [255/740] Generating lib/rte_fib_def with a custom command 00:01:48.293 [256/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:48.293 [257/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.293 [258/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:48.293 [259/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:48.293 [260/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:48.293 [261/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:48.293 [262/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:48.293 [263/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:48.293 [264/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:48.293 [265/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:48.293 [266/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.293 [267/740] Linking static target lib/librte_compressdev.a 00:01:48.293 [268/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:48.293 [269/740] Generating lib/rte_port_def with a custom command 00:01:48.293 [270/740] Generating lib/rte_port_mingw with a custom command 00:01:48.293 [271/740] Generating lib/rte_pdump_def with a custom command 00:01:48.293 [272/740] Generating lib/rte_pdump_mingw with a custom command 00:01:48.293 [273/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.293 [274/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.293 [275/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:48.293 [276/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:48.293 [277/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:48.293 [278/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.293 [279/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:48.293 [280/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.293 [281/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.293 [282/740] Linking static target lib/librte_rcu.a 00:01:48.293 [283/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:48.293 [284/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:48.293 [285/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:48.293 [286/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.293 [287/740] Linking static target lib/librte_rawdev.a 00:01:48.562 [288/740] Linking static target lib/librte_mempool.a 00:01:48.562 [289/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:48.562 [290/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:48.562 [291/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:48.562 [292/740] Linking static target lib/librte_bbdev.a 00:01:48.562 [293/740] Linking static target lib/librte_gro.a 00:01:48.562 [294/740] Generating lib/rte_table_def with a custom command 00:01:48.562 [295/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:48.562 [296/740] Generating lib/net.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:48.562 [297/740] Generating lib/rte_table_mingw with a custom command 00:01:48.562 [298/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:48.562 [299/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:48.562 [300/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:48.562 [301/740] Linking static target lib/librte_dmadev.a 00:01:48.562 [302/740] Linking static target lib/librte_gpudev.a 00:01:48.562 [303/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.562 [304/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.562 [305/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:48.562 [306/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:48.562 [307/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:48.562 [308/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.562 [309/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:48.562 [310/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:48.562 [311/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:48.562 [312/740] Generating lib/rte_pipeline_def with a custom command 00:01:48.562 [313/740] Generating lib/rte_pipeline_mingw with a custom command 00:01:48.562 [314/740] Linking static target lib/librte_gso.a 00:01:48.562 [315/740] Linking static target lib/librte_latencystats.a 00:01:48.562 [316/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:48.562 [317/740] Linking target lib/librte_telemetry.so.23.0 00:01:48.562 [318/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:48.562 [319/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:48.562 [320/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.562 [321/740] Generating lib/rte_graph_def with a custom command 00:01:48.562 [322/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:48.562 [323/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:48.562 [324/740] Linking static target lib/librte_distributor.a 00:01:48.562 [325/740] Generating lib/rte_graph_mingw with a custom command 00:01:48.562 [326/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:48.562 [327/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:48.562 [328/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:48.562 [329/740] Linking static target lib/librte_ip_frag.a 00:01:48.822 [330/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:48.822 [331/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.822 [332/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:48.822 [333/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:48.822 [334/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:48.822 [335/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:48.822 [336/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.823 [337/740] Compiling C object 
lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:48.823 [338/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.823 [339/740] Linking static target lib/librte_regexdev.a 00:01:48.823 [340/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:48.823 [341/740] Generating lib/rte_node_def with a custom command 00:01:48.823 [342/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:48.823 [343/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:48.823 [344/740] Generating lib/rte_node_mingw with a custom command 00:01:48.823 [345/740] Linking static target lib/librte_eal.a 00:01:48.823 [346/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.823 [347/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.823 [348/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:48.823 [349/740] Generating drivers/rte_bus_pci_def with a custom command 00:01:48.823 [350/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:48.823 [351/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.823 [352/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.823 [353/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:48.823 [354/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:48.823 [355/740] Generating drivers/rte_bus_vdev_def with a custom command 00:01:48.823 [356/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:48.823 [357/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:48.823 [358/740] Linking static target lib/librte_power.a 00:01:48.823 [359/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:48.823 [360/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.823 [361/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.823 [362/740] Linking static target lib/librte_reorder.a 00:01:48.823 [363/740] Generating drivers/rte_mempool_ring_def with a custom command 00:01:48.823 [364/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:48.823 [365/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:48.823 [366/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:48.823 [367/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:48.823 [368/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:49.084 [369/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:49.084 [370/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:49.084 [371/740] Linking static target lib/librte_pcapng.a 00:01:49.084 [372/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:49.084 [373/740] Linking static target lib/librte_security.a 00:01:49.084 [374/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:49.084 [375/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.084 [376/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:49.084 [377/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:49.084 [378/740] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:49.084 [379/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:49.084 [380/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:49.084 [381/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:49.084 [382/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:49.084 [383/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.084 [384/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:49.084 [385/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.084 [386/740] Generating drivers/rte_net_i40e_def with a custom command 00:01:49.084 [387/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:49.084 [388/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:49.084 [389/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:49.084 [390/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:49.084 [391/740] Linking static target lib/librte_bpf.a 00:01:49.084 [392/740] Linking static target lib/librte_mbuf.a 00:01:49.084 [393/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:49.084 [394/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:49.084 [395/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:49.348 [396/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:49.348 [397/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:49.348 [398/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:49.348 [399/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:49.348 [400/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:49.348 [401/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:49.348 [402/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:49.348 [403/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:49.348 [404/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:49.348 [405/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:49.348 [406/740] Linking static target lib/librte_lpm.a 00:01:49.348 [407/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:49.348 [408/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:49.348 [409/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:49.348 [410/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:49.348 [411/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:49.348 [412/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:49.348 [413/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:49.348 [414/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:49.348 [415/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:49.348 [416/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:49.348 [417/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.348 [418/740] Compiling C object 
lib/librte_graph.a.p/graph_graph.c.o 00:01:49.348 [419/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:49.348 [420/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.348 [421/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:49.348 [422/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:49.348 [423/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:49.348 [424/740] Linking static target lib/librte_rib.a 00:01:49.348 [425/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:49.348 [426/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:49.348 [427/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:49.348 [428/740] Linking static target lib/librte_graph.a 00:01:49.348 [429/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.348 [430/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:49.348 [431/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:49.348 [432/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:49.348 [433/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.348 [434/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:49.348 [435/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:49.612 [436/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:49.612 [437/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:49.612 [438/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.612 [439/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:49.612 [440/740] Linking static target lib/librte_efd.a 00:01:49.612 [441/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:49.612 [442/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:49.612 [443/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:49.612 [444/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:49.612 [445/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:49.612 [446/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:49.612 [447/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:49.612 [448/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:49.612 [449/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.612 [450/740] Linking static target drivers/librte_bus_vdev.a 00:01:49.612 [451/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.612 [452/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:49.612 [453/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.612 [454/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:49.612 [455/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:49.612 [456/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:49.612 [457/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.877 [458/740] 
Linking static target lib/librte_fib.a 00:01:49.877 [459/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.877 [460/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:49.877 [461/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:49.877 [462/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.877 [463/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:49.877 [464/740] Linking static target lib/librte_pdump.a 00:01:49.877 [465/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:49.877 [466/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.877 [467/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.877 [468/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:49.877 [469/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.877 [470/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:49.877 [471/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:49.877 [472/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:50.138 [473/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:50.138 [474/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.138 [475/740] Linking static target drivers/librte_bus_pci.a 00:01:50.138 [476/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.138 [477/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.138 [478/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:50.138 [479/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:50.138 [480/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:50.139 [481/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:50.139 [482/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:50.139 [483/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.139 [484/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:50.139 [485/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:50.139 [486/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.139 [487/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.139 [488/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:50.139 [489/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:50.139 [490/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:50.139 [491/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:50.139 [492/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:50.139 [493/740] Linking static target lib/librte_table.a 00:01:50.139 [494/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:50.139 
[495/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:50.398 [496/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:50.398 [497/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:50.398 [498/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:50.398 [499/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:50.398 [500/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:50.398 [501/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:50.398 [502/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:50.398 [503/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.398 [504/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:50.398 [505/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.398 [506/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:50.398 [507/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:50.398 [508/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:50.398 [509/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:50.398 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:50.398 [511/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:50.398 [512/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:50.398 [513/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.398 [514/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:50.398 [515/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:50.398 [516/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:50.398 [517/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:50.398 [518/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:50.398 [519/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:50.398 [520/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:50.398 [521/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:50.398 [522/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:50.398 [523/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:50.398 [524/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.656 [525/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:50.656 [526/740] Linking static target lib/librte_cryptodev.a 00:01:50.656 [527/740] Linking static target lib/librte_sched.a 00:01:50.656 [528/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:50.656 [529/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.656 [530/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:50.656 [531/740] Linking static target lib/librte_node.a 00:01:50.656 [532/740] Compiling C object 
app/dpdk-proc-info.p/proc-info_main.c.o 00:01:50.656 [533/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:50.656 [534/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:50.657 [535/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:50.657 [536/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:50.657 [537/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:50.657 [538/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:50.657 [539/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:50.657 [540/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:50.657 [541/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.657 [542/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.657 [543/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.657 [544/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:50.657 [545/740] Linking static target drivers/librte_mempool_ring.a 00:01:50.657 [546/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.657 [547/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:50.657 [548/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:50.657 [549/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:50.657 [550/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:50.657 [551/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:50.657 [552/740] Linking static target lib/librte_ipsec.a 00:01:50.915 [553/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:50.915 [554/740] Linking static target lib/librte_ethdev.a 00:01:50.915 [555/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:50.915 [556/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:50.915 [557/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.915 [558/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:50.915 [559/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:50.915 [560/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:50.915 [561/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:50.915 [562/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:50.915 [563/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:50.915 [564/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:50.915 [565/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:50.915 [566/740] Linking static target lib/librte_member.a 00:01:50.915 [567/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:50.915 [568/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:50.915 [569/740] Linking static target lib/librte_port.a 00:01:50.915 [570/740] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:50.915 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:50.915 [572/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:50.915 [573/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:50.915 [574/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:50.915 [575/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:50.915 [576/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:50.915 [577/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:50.915 [578/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:50.915 [579/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:51.173 [580/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:51.173 [581/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:51.173 [582/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:51.173 [583/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.173 [584/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:51.173 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:51.173 [586/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:51.173 [587/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:51.173 [588/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:51.173 [589/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.173 [590/740] Linking static target lib/librte_hash.a 00:01:51.173 [591/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:51.173 [592/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:51.173 [593/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:51.173 [594/740] Linking static target lib/librte_eventdev.a 00:01:51.173 [595/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.432 [596/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:51.432 [597/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:51.432 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:51.432 [599/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:51.432 [600/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:51.432 [601/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:51.432 [602/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.432 [603/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:51.432 [604/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:51.432 [605/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:51.432 [606/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:51.432 [607/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:51.690 [608/740] Compiling C 
object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:51.690 [609/740] Linking static target lib/librte_acl.a 00:01:51.690 [610/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:01:51.690 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:51.690 [612/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.949 [613/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:51.949 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:52.208 [615/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:52.208 [616/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.208 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:52.208 [618/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.774 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:52.774 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:52.774 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:53.341 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:53.341 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:53.600 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:53.600 [625/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:53.600 [626/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:53.600 [627/740] Linking static target drivers/librte_net_i40e.a 00:01:54.168 [628/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.168 [629/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.168 [630/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:54.428 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:54.428 [632/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.687 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.972 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.233 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:00.233 [636/740] Linking static target lib/librte_vhost.a 00:02:01.221 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:01.221 [638/740] Linking static target lib/librte_pipeline.a 00:02:01.480 [639/740] Linking target app/dpdk-test-acl 00:02:01.480 [640/740] Linking target app/dpdk-proc-info 00:02:01.480 [641/740] Linking target app/dpdk-dumpcap 00:02:01.740 [642/740] Linking target app/dpdk-test-crypto-perf 00:02:01.741 [643/740] Linking target app/dpdk-test-pipeline 00:02:01.741 [644/740] Linking target app/dpdk-test-sad 00:02:01.741 [645/740] Linking target app/dpdk-pdump 00:02:01.741 [646/740] Linking target app/dpdk-test-cmdline 00:02:01.741 [647/740] Linking target app/dpdk-test-fib 00:02:01.741 [648/740] Linking target app/dpdk-test-compress-perf 00:02:01.741 [649/740] Linking target app/dpdk-test-bbdev 00:02:01.741 [650/740] Linking target app/dpdk-test-eventdev 
00:02:01.741 [651/740] Linking target app/dpdk-test-gpudev 00:02:01.741 [652/740] Linking target app/dpdk-test-regex 00:02:01.741 [653/740] Linking target app/dpdk-test-flow-perf 00:02:01.741 [654/740] Linking target app/dpdk-test-security-perf 00:02:01.741 [655/740] Linking target app/dpdk-testpmd 00:02:02.309 [656/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.309 [657/740] Linking target lib/librte_eal.so.23.0 00:02:02.309 [658/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.568 [659/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:02.568 [660/740] Linking target lib/librte_meter.so.23.0 00:02:02.568 [661/740] Linking target lib/librte_rawdev.so.23.0 00:02:02.568 [662/740] Linking target lib/librte_timer.so.23.0 00:02:02.568 [663/740] Linking target lib/librte_ring.so.23.0 00:02:02.568 [664/740] Linking target lib/librte_pci.so.23.0 00:02:02.568 [665/740] Linking target lib/librte_cfgfile.so.23.0 00:02:02.568 [666/740] Linking target lib/librte_jobstats.so.23.0 00:02:02.568 [667/740] Linking target lib/librte_dmadev.so.23.0 00:02:02.568 [668/740] Linking target lib/librte_stack.so.23.0 00:02:02.568 [669/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:02.568 [670/740] Linking target lib/librte_graph.so.23.0 00:02:02.568 [671/740] Linking target lib/librte_acl.so.23.0 00:02:02.828 [672/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:02.828 [673/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:02.828 [674/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:02.828 [675/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:02.828 [676/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:02.828 [677/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:02.828 [678/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:02.828 [679/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:02.828 [680/740] Linking target lib/librte_rcu.so.23.0 00:02:02.828 [681/740] Linking target lib/librte_mempool.so.23.0 00:02:02.828 [682/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:02.828 [683/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:02.828 [684/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:02.828 [685/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:03.087 [686/740] Linking target lib/librte_rib.so.23.0 00:02:03.087 [687/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:03.087 [688/740] Linking target lib/librte_mbuf.so.23.0 00:02:03.087 [689/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:03.087 [690/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:03.087 [691/740] Linking target lib/librte_fib.so.23.0 00:02:03.087 [692/740] Linking target lib/librte_distributor.so.23.0 00:02:03.087 [693/740] Linking target lib/librte_bbdev.so.23.0 00:02:03.087 [694/740] Linking target lib/librte_net.so.23.0 00:02:03.087 [695/740] Linking target lib/librte_gpudev.so.23.0 00:02:03.087 [696/740] Linking target lib/librte_compressdev.so.23.0 
00:02:03.087 [697/740] Linking target lib/librte_reorder.so.23.0 00:02:03.087 [698/740] Linking target lib/librte_cryptodev.so.23.0 00:02:03.087 [699/740] Linking target lib/librte_regexdev.so.23.0 00:02:03.087 [700/740] Linking target lib/librte_sched.so.23.0 00:02:03.347 [701/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:03.347 [702/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:03.347 [703/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:03.347 [704/740] Linking target lib/librte_hash.so.23.0 00:02:03.347 [705/740] Linking target lib/librte_cmdline.so.23.0 00:02:03.347 [706/740] Linking target lib/librte_ethdev.so.23.0 00:02:03.347 [707/740] Linking target lib/librte_security.so.23.0 00:02:03.607 [708/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:03.607 [709/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:03.607 [710/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:03.607 [711/740] Linking target lib/librte_efd.so.23.0 00:02:03.607 [712/740] Linking target lib/librte_lpm.so.23.0 00:02:03.607 [713/740] Linking target lib/librte_member.so.23.0 00:02:03.607 [714/740] Linking target lib/librte_power.so.23.0 00:02:03.607 [715/740] Linking target lib/librte_metrics.so.23.0 00:02:03.607 [716/740] Linking target lib/librte_gro.so.23.0 00:02:03.607 [717/740] Linking target lib/librte_bpf.so.23.0 00:02:03.607 [718/740] Linking target lib/librte_gso.so.23.0 00:02:03.607 [719/740] Linking target lib/librte_pcapng.so.23.0 00:02:03.607 [720/740] Linking target lib/librte_ip_frag.so.23.0 00:02:03.607 [721/740] Linking target lib/librte_eventdev.so.23.0 00:02:03.607 [722/740] Linking target lib/librte_ipsec.so.23.0 00:02:03.607 [723/740] Linking target lib/librte_vhost.so.23.0 00:02:03.607 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:03.607 [725/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:03.866 [726/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:03.866 [727/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:03.866 [728/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:03.866 [729/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:03.866 [730/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:03.866 [731/740] Linking target lib/librte_node.so.23.0 00:02:03.866 [732/740] Linking target lib/librte_latencystats.so.23.0 00:02:03.866 [733/740] Linking target lib/librte_bitratestats.so.23.0 00:02:03.866 [734/740] Linking target lib/librte_pdump.so.23.0 00:02:03.866 [735/740] Linking target lib/librte_port.so.23.0 00:02:03.866 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:04.126 [737/740] Linking target lib/librte_table.so.23.0 00:02:04.126 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:06.667 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.668 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:06.668 21:05:41 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 
install 00:02:06.668 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:06.668 [0/1] Installing files. 00:02:06.668 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:06.668 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:06.668 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.669 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.669 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.669 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:06.670 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.670 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.670 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:06.670 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.671 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:06.672 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.672 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.673 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:06.674 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:06.674 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_telemetry.a to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing 
lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.674 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_power.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 
Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:06.938 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:06.938 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:06.938 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.938 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:06.938 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.938 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.939 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.940 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.941 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:06.942 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:06.942 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:06.942 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:06.942 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:06.942 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:06.942 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:06.942 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:06.942 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:06.942 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:06.942 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:06.942 Installing symlink 
pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:06.942 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:06.942 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:06.942 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:06.942 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:06.942 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:06.942 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:06.942 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:06.942 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:06.942 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:06.942 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:06.942 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:06.942 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:06.942 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:06.942 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:06.942 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:06.942 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:06.942 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:06.942 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:06.942 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:06.942 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:06.942 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:06.942 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:06.942 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:06.942 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:06.942 Installing symlink pointing to librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:06.942 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:06.942 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:06.942 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:06.942 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:06.942 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:06.942 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:06.943 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:06.943 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:06.943 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:06.943 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:06.943 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:06.943 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:06.943 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:06.943 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:06.943 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:06.943 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:06.943 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:06.943 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:06.943 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:06.943 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:06.943 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:06.943 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:06.943 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:06.943 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:06.943 
Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:06.943 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:06.943 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:06.943 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:06.943 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:06.943 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:06.943 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:06.943 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:06.943 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:06.943 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:06.943 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:06.943 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:06.943 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:06.943 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:06.943 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:06.943 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:06.943 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:06.943 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:06.943 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:06.943 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:06.943 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:06.943 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:06.943 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:06.943 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:06.943 Installing symlink pointing to librte_security.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:06.943 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:06.943 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:06.943 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:06.943 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:06.943 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:06.943 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:06.943 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:06.943 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:06.943 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:06.943 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:06.943 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:06.943 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:06.943 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:06.943 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:06.943 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:06.943 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:06.943 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:06.943 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:06.943 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:06.943 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:06.943 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:06.943 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:06.943 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:06.943 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:06.943 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:06.943 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:06.943 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:06.943 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:06.943 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:06.943 Installing symlink pointing to librte_graph.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:06.943 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:06.943 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:06.943 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:06.943 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:06.943 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:06.943 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:06.943 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:06.943 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:06.943 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:06.943 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:06.943 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:07.202 21:05:41 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:07.202 21:05:41 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:07.202 21:05:41 -- common/autobuild_common.sh@203 -- $ cat 00:02:07.202 21:05:41 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:07.202 00:02:07.202 real 0m25.389s 00:02:07.202 user 6m35.085s 00:02:07.202 sys 2m21.894s 00:02:07.203 21:05:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:07.203 21:05:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.203 ************************************ 00:02:07.203 END TEST build_native_dpdk 00:02:07.203 ************************************ 00:02:07.203 21:05:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:07.203 21:05:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:07.203 21:05:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:07.203 21:05:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:07.203 21:05:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:07.203 21:05:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:07.203 21:05:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:07.203 21:05:41 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:07.203 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:02:07.461 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:07.461 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:07.461 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:07.719 Using 'verbs' RDMA provider 00:02:20.861 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:35.821 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:35.821 Creating mk/config.mk...done. 00:02:35.821 Creating mk/cc.flags.mk...done. 00:02:35.821 Type 'make' to build. 00:02:35.821 21:06:09 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:35.821 21:06:09 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:35.821 21:06:09 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:35.821 21:06:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.821 ************************************ 00:02:35.821 START TEST make 00:02:35.821 ************************************ 00:02:35.821 21:06:09 -- common/autotest_common.sh@1104 -- $ make -j112 00:02:35.821 make[1]: Nothing to be done for 'all'. 00:02:45.776 CC lib/ut/ut.o 00:02:45.776 CC lib/ut_mock/mock.o 00:02:45.776 CC lib/log/log.o 00:02:45.776 CC lib/log/log_deprecated.o 00:02:45.776 CC lib/log/log_flags.o 00:02:45.776 LIB libspdk_ut.a 00:02:45.776 LIB libspdk_ut_mock.a 00:02:45.776 SO libspdk_ut.so.1.0 00:02:45.776 LIB libspdk_log.a 00:02:45.776 SO libspdk_ut_mock.so.5.0 00:02:45.776 SO libspdk_log.so.6.1 00:02:45.776 SYMLINK libspdk_ut.so 00:02:45.776 SYMLINK libspdk_ut_mock.so 00:02:45.776 SYMLINK libspdk_log.so 00:02:45.776 CXX lib/trace_parser/trace.o 00:02:45.776 CC lib/util/base64.o 00:02:45.776 CC lib/util/bit_array.o 00:02:45.776 CC lib/util/cpuset.o 00:02:45.776 CC lib/util/crc16.o 00:02:45.776 CC lib/util/crc32.o 00:02:45.776 CC lib/util/crc32c.o 00:02:45.776 CC lib/util/crc32_ieee.o 00:02:45.776 CC lib/ioat/ioat.o 00:02:45.776 CC lib/util/crc64.o 00:02:45.776 CC lib/dma/dma.o 00:02:45.776 CC lib/util/dif.o 00:02:45.776 CC lib/util/fd.o 00:02:45.776 CC lib/util/file.o 00:02:45.776 CC lib/util/hexlify.o 00:02:45.776 CC lib/util/math.o 00:02:45.776 CC lib/util/iov.o 00:02:45.776 CC lib/util/pipe.o 00:02:45.776 CC lib/util/strerror_tls.o 00:02:45.776 CC lib/util/string.o 00:02:45.776 CC lib/util/fd_group.o 00:02:45.776 CC lib/util/uuid.o 00:02:45.776 CC lib/util/xor.o 00:02:45.776 CC lib/util/zipf.o 00:02:45.776 CC lib/vfio_user/host/vfio_user_pci.o 00:02:45.776 CC lib/vfio_user/host/vfio_user.o 00:02:45.776 LIB libspdk_dma.a 00:02:45.776 SO libspdk_dma.so.3.0 00:02:45.777 SYMLINK libspdk_dma.so 00:02:45.777 LIB libspdk_ioat.a 00:02:45.777 SO libspdk_ioat.so.6.0 00:02:45.777 LIB libspdk_vfio_user.a 00:02:45.777 SO libspdk_vfio_user.so.4.0 00:02:45.777 SYMLINK libspdk_ioat.so 00:02:45.777 LIB libspdk_util.a 00:02:45.777 SYMLINK libspdk_vfio_user.so 00:02:45.777 SO libspdk_util.so.8.0 00:02:45.777 LIB libspdk_trace_parser.a 00:02:45.777 SO libspdk_trace_parser.so.4.0 00:02:45.777 SYMLINK libspdk_util.so 00:02:45.777 SYMLINK libspdk_trace_parser.so 00:02:46.034 CC lib/idxd/idxd.o 00:02:46.034 CC lib/idxd/idxd_user.o 00:02:46.034 CC lib/idxd/idxd_kernel.o 00:02:46.034 CC lib/env_dpdk/env.o 00:02:46.034 CC lib/env_dpdk/memory.o 00:02:46.034 CC lib/env_dpdk/pci.o 00:02:46.034 CC lib/env_dpdk/init.o 00:02:46.034 CC lib/json/json_parse.o 00:02:46.034 CC lib/env_dpdk/threads.o 00:02:46.034 CC 
lib/env_dpdk/pci_ioat.o 00:02:46.034 CC lib/json/json_util.o 00:02:46.034 CC lib/env_dpdk/pci_virtio.o 00:02:46.034 CC lib/json/json_write.o 00:02:46.034 CC lib/env_dpdk/pci_vmd.o 00:02:46.034 CC lib/env_dpdk/pci_idxd.o 00:02:46.034 CC lib/env_dpdk/pci_event.o 00:02:46.034 CC lib/env_dpdk/sigbus_handler.o 00:02:46.034 CC lib/vmd/vmd.o 00:02:46.034 CC lib/env_dpdk/pci_dpdk.o 00:02:46.034 CC lib/conf/conf.o 00:02:46.034 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:46.034 CC lib/vmd/led.o 00:02:46.034 CC lib/rdma/common.o 00:02:46.034 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:46.034 CC lib/rdma/rdma_verbs.o 00:02:46.291 LIB libspdk_conf.a 00:02:46.291 SO libspdk_conf.so.5.0 00:02:46.291 LIB libspdk_json.a 00:02:46.291 LIB libspdk_rdma.a 00:02:46.291 SO libspdk_json.so.5.1 00:02:46.291 SO libspdk_rdma.so.5.0 00:02:46.291 SYMLINK libspdk_conf.so 00:02:46.291 SYMLINK libspdk_rdma.so 00:02:46.291 LIB libspdk_idxd.a 00:02:46.291 SYMLINK libspdk_json.so 00:02:46.548 SO libspdk_idxd.so.11.0 00:02:46.548 LIB libspdk_vmd.a 00:02:46.548 SYMLINK libspdk_idxd.so 00:02:46.548 SO libspdk_vmd.so.5.0 00:02:46.548 SYMLINK libspdk_vmd.so 00:02:46.548 CC lib/jsonrpc/jsonrpc_server.o 00:02:46.548 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:46.548 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:46.548 CC lib/jsonrpc/jsonrpc_client.o 00:02:46.805 LIB libspdk_jsonrpc.a 00:02:46.805 SO libspdk_jsonrpc.so.5.1 00:02:47.063 LIB libspdk_env_dpdk.a 00:02:47.063 SYMLINK libspdk_jsonrpc.so 00:02:47.063 SO libspdk_env_dpdk.so.13.0 00:02:47.063 SYMLINK libspdk_env_dpdk.so 00:02:47.320 CC lib/rpc/rpc.o 00:02:47.320 LIB libspdk_rpc.a 00:02:47.578 SO libspdk_rpc.so.5.0 00:02:47.578 SYMLINK libspdk_rpc.so 00:02:47.836 CC lib/sock/sock.o 00:02:47.836 CC lib/sock/sock_rpc.o 00:02:47.836 CC lib/trace/trace_flags.o 00:02:47.836 CC lib/notify/notify.o 00:02:47.836 CC lib/trace/trace.o 00:02:47.836 CC lib/notify/notify_rpc.o 00:02:47.836 CC lib/trace/trace_rpc.o 00:02:47.836 LIB libspdk_notify.a 00:02:48.096 SO libspdk_notify.so.5.0 00:02:48.096 LIB libspdk_trace.a 00:02:48.096 SO libspdk_trace.so.9.0 00:02:48.096 LIB libspdk_sock.a 00:02:48.096 SYMLINK libspdk_notify.so 00:02:48.096 SO libspdk_sock.so.8.0 00:02:48.096 SYMLINK libspdk_trace.so 00:02:48.096 SYMLINK libspdk_sock.so 00:02:48.355 CC lib/thread/iobuf.o 00:02:48.355 CC lib/thread/thread.o 00:02:48.355 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:48.355 CC lib/nvme/nvme_ctrlr.o 00:02:48.355 CC lib/nvme/nvme_fabric.o 00:02:48.355 CC lib/nvme/nvme_ns.o 00:02:48.355 CC lib/nvme/nvme_ns_cmd.o 00:02:48.355 CC lib/nvme/nvme_pcie_common.o 00:02:48.355 CC lib/nvme/nvme_qpair.o 00:02:48.355 CC lib/nvme/nvme_pcie.o 00:02:48.355 CC lib/nvme/nvme.o 00:02:48.355 CC lib/nvme/nvme_quirks.o 00:02:48.355 CC lib/nvme/nvme_transport.o 00:02:48.355 CC lib/nvme/nvme_discovery.o 00:02:48.355 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:48.355 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:48.355 CC lib/nvme/nvme_tcp.o 00:02:48.355 CC lib/nvme/nvme_opal.o 00:02:48.355 CC lib/nvme/nvme_io_msg.o 00:02:48.355 CC lib/nvme/nvme_poll_group.o 00:02:48.355 CC lib/nvme/nvme_zns.o 00:02:48.355 CC lib/nvme/nvme_cuse.o 00:02:48.355 CC lib/nvme/nvme_vfio_user.o 00:02:48.355 CC lib/nvme/nvme_rdma.o 00:02:49.290 LIB libspdk_thread.a 00:02:49.550 SO libspdk_thread.so.9.0 00:02:49.550 SYMLINK libspdk_thread.so 00:02:49.809 CC lib/virtio/virtio.o 00:02:49.809 CC lib/virtio/virtio_vhost_user.o 00:02:49.809 CC lib/virtio/virtio_vfio_user.o 00:02:49.809 CC lib/virtio/virtio_pci.o 00:02:49.809 CC lib/accel/accel.o 00:02:49.809 CC lib/accel/accel_rpc.o 
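[Editorial note, not part of the captured output] In the make output above, each SPDK library follows the same pattern: CC (compile objects), LIB (static archive), SO (versioned shared object), SYMLINK (unversioned .so link), because configure was run with --with-shared. A minimal sketch of inspecting the versioned SONAME on one of the resulting libraries, assuming SPDK's default build/lib output directory (the exact path is an assumption, not shown in the log):

  # Dump the dynamic section of a built SPDK shared library and show its SONAME entry
  readelf -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib/libspdk_log.so | grep SONAME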
00:02:49.809 CC lib/init/json_config.o 00:02:49.809 CC lib/init/subsystem_rpc.o 00:02:49.809 CC lib/blob/blobstore.o 00:02:49.809 CC lib/accel/accel_sw.o 00:02:49.809 CC lib/init/subsystem.o 00:02:49.809 CC lib/blob/zeroes.o 00:02:49.809 CC lib/init/rpc.o 00:02:49.809 CC lib/blob/request.o 00:02:49.809 CC lib/blob/blob_bs_dev.o 00:02:49.809 LIB libspdk_nvme.a 00:02:50.068 LIB libspdk_init.a 00:02:50.068 LIB libspdk_virtio.a 00:02:50.068 SO libspdk_nvme.so.12.0 00:02:50.068 SO libspdk_init.so.4.0 00:02:50.068 SO libspdk_virtio.so.6.0 00:02:50.068 SYMLINK libspdk_init.so 00:02:50.068 SYMLINK libspdk_virtio.so 00:02:50.327 SYMLINK libspdk_nvme.so 00:02:50.327 CC lib/event/app.o 00:02:50.327 CC lib/event/reactor.o 00:02:50.327 CC lib/event/log_rpc.o 00:02:50.327 CC lib/event/app_rpc.o 00:02:50.327 CC lib/event/scheduler_static.o 00:02:50.587 LIB libspdk_accel.a 00:02:50.587 SO libspdk_accel.so.14.0 00:02:50.587 LIB libspdk_event.a 00:02:50.587 SYMLINK libspdk_accel.so 00:02:50.587 SO libspdk_event.so.12.0 00:02:50.587 SYMLINK libspdk_event.so 00:02:50.846 CC lib/bdev/bdev.o 00:02:50.846 CC lib/bdev/part.o 00:02:50.846 CC lib/bdev/bdev_rpc.o 00:02:50.846 CC lib/bdev/bdev_zone.o 00:02:50.846 CC lib/bdev/scsi_nvme.o 00:02:51.783 LIB libspdk_blob.a 00:02:51.783 SO libspdk_blob.so.10.1 00:02:51.783 SYMLINK libspdk_blob.so 00:02:52.041 CC lib/lvol/lvol.o 00:02:52.041 CC lib/blobfs/blobfs.o 00:02:52.041 CC lib/blobfs/tree.o 00:02:52.609 LIB libspdk_bdev.a 00:02:52.609 LIB libspdk_lvol.a 00:02:52.609 LIB libspdk_blobfs.a 00:02:52.609 SO libspdk_lvol.so.9.1 00:02:52.609 SO libspdk_bdev.so.14.0 00:02:52.609 SO libspdk_blobfs.so.9.0 00:02:52.609 SYMLINK libspdk_lvol.so 00:02:52.609 SYMLINK libspdk_blobfs.so 00:02:52.609 SYMLINK libspdk_bdev.so 00:02:52.869 CC lib/ublk/ublk.o 00:02:52.869 CC lib/ublk/ublk_rpc.o 00:02:52.869 CC lib/nbd/nbd.o 00:02:52.869 CC lib/nbd/nbd_rpc.o 00:02:52.869 CC lib/scsi/dev.o 00:02:52.869 CC lib/scsi/lun.o 00:02:52.869 CC lib/scsi/port.o 00:02:52.869 CC lib/scsi/scsi_bdev.o 00:02:52.869 CC lib/scsi/scsi.o 00:02:52.869 CC lib/nvmf/ctrlr.o 00:02:52.869 CC lib/scsi/scsi_pr.o 00:02:52.869 CC lib/nvmf/ctrlr_discovery.o 00:02:52.869 CC lib/ftl/ftl_core.o 00:02:52.869 CC lib/scsi/scsi_rpc.o 00:02:52.869 CC lib/ftl/ftl_init.o 00:02:52.869 CC lib/nvmf/ctrlr_bdev.o 00:02:52.869 CC lib/nvmf/nvmf.o 00:02:52.869 CC lib/scsi/task.o 00:02:52.869 CC lib/ftl/ftl_layout.o 00:02:52.869 CC lib/nvmf/subsystem.o 00:02:52.869 CC lib/ftl/ftl_debug.o 00:02:52.869 CC lib/ftl/ftl_io.o 00:02:52.869 CC lib/nvmf/nvmf_rpc.o 00:02:52.869 CC lib/nvmf/transport.o 00:02:52.869 CC lib/ftl/ftl_sb.o 00:02:52.869 CC lib/ftl/ftl_l2p.o 00:02:52.869 CC lib/nvmf/tcp.o 00:02:52.869 CC lib/ftl/ftl_l2p_flat.o 00:02:52.869 CC lib/ftl/ftl_band.o 00:02:52.869 CC lib/nvmf/rdma.o 00:02:52.870 CC lib/ftl/ftl_nv_cache.o 00:02:52.870 CC lib/ftl/ftl_band_ops.o 00:02:52.870 CC lib/ftl/ftl_writer.o 00:02:52.870 CC lib/ftl/ftl_rq.o 00:02:52.870 CC lib/ftl/ftl_reloc.o 00:02:52.870 CC lib/ftl/ftl_l2p_cache.o 00:02:52.870 CC lib/ftl/ftl_p2l.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:02:52.870 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:52.870 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:52.870 CC lib/ftl/utils/ftl_md.o 00:02:52.870 CC lib/ftl/utils/ftl_conf.o 00:02:52.870 CC lib/ftl/utils/ftl_mempool.o 00:02:52.870 CC lib/ftl/utils/ftl_bitmap.o 00:02:52.870 CC lib/ftl/utils/ftl_property.o 00:02:52.870 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:52.870 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:53.128 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:53.128 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:53.128 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:53.128 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:53.128 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:53.128 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:53.128 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:53.128 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.128 CC lib/ftl/base/ftl_base_dev.o 00:02:53.128 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.128 CC lib/ftl/ftl_trace.o 00:02:53.386 LIB libspdk_nbd.a 00:02:53.386 SO libspdk_nbd.so.6.0 00:02:53.386 LIB libspdk_scsi.a 00:02:53.386 SYMLINK libspdk_nbd.so 00:02:53.646 SO libspdk_scsi.so.8.0 00:02:53.646 LIB libspdk_ublk.a 00:02:53.646 SO libspdk_ublk.so.2.0 00:02:53.646 SYMLINK libspdk_scsi.so 00:02:53.646 SYMLINK libspdk_ublk.so 00:02:53.905 LIB libspdk_ftl.a 00:02:53.905 CC lib/iscsi/conn.o 00:02:53.905 CC lib/iscsi/iscsi.o 00:02:53.905 CC lib/iscsi/init_grp.o 00:02:53.905 CC lib/iscsi/md5.o 00:02:53.905 CC lib/iscsi/portal_grp.o 00:02:53.905 CC lib/iscsi/param.o 00:02:53.905 CC lib/iscsi/tgt_node.o 00:02:53.905 CC lib/iscsi/iscsi_subsystem.o 00:02:53.905 CC lib/iscsi/iscsi_rpc.o 00:02:53.905 CC lib/iscsi/task.o 00:02:53.905 CC lib/vhost/vhost.o 00:02:53.905 CC lib/vhost/vhost_rpc.o 00:02:53.905 CC lib/vhost/vhost_scsi.o 00:02:53.905 CC lib/vhost/vhost_blk.o 00:02:53.905 CC lib/vhost/rte_vhost_user.o 00:02:53.905 SO libspdk_ftl.so.8.0 00:02:54.165 SYMLINK libspdk_ftl.so 00:02:54.771 LIB libspdk_nvmf.a 00:02:54.771 LIB libspdk_vhost.a 00:02:54.771 SO libspdk_nvmf.so.17.0 00:02:54.771 SO libspdk_vhost.so.7.1 00:02:54.771 SYMLINK libspdk_vhost.so 00:02:54.771 LIB libspdk_iscsi.a 00:02:54.771 SYMLINK libspdk_nvmf.so 00:02:54.771 SO libspdk_iscsi.so.7.0 00:02:55.030 SYMLINK libspdk_iscsi.so 00:02:55.289 CC module/env_dpdk/env_dpdk_rpc.o 00:02:55.548 CC module/sock/posix/posix.o 00:02:55.548 CC module/blob/bdev/blob_bdev.o 00:02:55.548 CC module/accel/ioat/accel_ioat.o 00:02:55.548 CC module/accel/ioat/accel_ioat_rpc.o 00:02:55.548 CC module/scheduler/gscheduler/gscheduler.o 00:02:55.548 LIB libspdk_env_dpdk_rpc.a 00:02:55.548 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:55.548 CC module/accel/error/accel_error.o 00:02:55.548 CC module/accel/error/accel_error_rpc.o 00:02:55.548 CC module/accel/dsa/accel_dsa.o 00:02:55.548 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:55.548 CC module/accel/dsa/accel_dsa_rpc.o 00:02:55.548 CC module/accel/iaa/accel_iaa.o 00:02:55.548 CC module/accel/iaa/accel_iaa_rpc.o 00:02:55.548 SO libspdk_env_dpdk_rpc.so.5.0 00:02:55.548 SYMLINK libspdk_env_dpdk_rpc.so 00:02:55.548 LIB libspdk_scheduler_gscheduler.a 00:02:55.548 LIB libspdk_scheduler_dpdk_governor.a 00:02:55.548 SO libspdk_scheduler_gscheduler.so.3.0 00:02:55.548 LIB libspdk_accel_error.a 00:02:55.548 LIB libspdk_accel_ioat.a 00:02:55.807 LIB libspdk_scheduler_dynamic.a 00:02:55.807 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:55.807 LIB libspdk_accel_iaa.a 00:02:55.807 SO libspdk_accel_error.so.1.0 00:02:55.807 SO libspdk_accel_ioat.so.5.0 00:02:55.807 SYMLINK libspdk_scheduler_gscheduler.so 
00:02:55.807 LIB libspdk_accel_dsa.a 00:02:55.807 SO libspdk_scheduler_dynamic.so.3.0 00:02:55.807 LIB libspdk_blob_bdev.a 00:02:55.808 SO libspdk_accel_iaa.so.2.0 00:02:55.808 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:55.808 SYMLINK libspdk_accel_error.so 00:02:55.808 SO libspdk_accel_dsa.so.4.0 00:02:55.808 SO libspdk_blob_bdev.so.10.1 00:02:55.808 SYMLINK libspdk_accel_ioat.so 00:02:55.808 SYMLINK libspdk_scheduler_dynamic.so 00:02:55.808 SYMLINK libspdk_accel_iaa.so 00:02:55.808 SYMLINK libspdk_accel_dsa.so 00:02:55.808 SYMLINK libspdk_blob_bdev.so 00:02:56.066 LIB libspdk_sock_posix.a 00:02:56.066 SO libspdk_sock_posix.so.5.0 00:02:56.066 SYMLINK libspdk_sock_posix.so 00:02:56.066 CC module/bdev/error/vbdev_error.o 00:02:56.066 CC module/bdev/error/vbdev_error_rpc.o 00:02:56.066 CC module/bdev/raid/bdev_raid_sb.o 00:02:56.066 CC module/bdev/raid/bdev_raid_rpc.o 00:02:56.066 CC module/bdev/raid/bdev_raid.o 00:02:56.066 CC module/bdev/raid/raid0.o 00:02:56.066 CC module/bdev/raid/raid1.o 00:02:56.066 CC module/bdev/raid/concat.o 00:02:56.066 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:56.066 CC module/bdev/passthru/vbdev_passthru.o 00:02:56.066 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:56.066 CC module/bdev/malloc/bdev_malloc.o 00:02:56.066 CC module/bdev/delay/vbdev_delay.o 00:02:56.066 CC module/bdev/gpt/gpt.o 00:02:56.066 CC module/bdev/gpt/vbdev_gpt.o 00:02:56.066 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:56.066 CC module/bdev/null/bdev_null.o 00:02:56.066 CC module/bdev/ftl/bdev_ftl.o 00:02:56.066 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:56.066 CC module/bdev/null/bdev_null_rpc.o 00:02:56.066 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:56.066 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:56.066 CC module/bdev/lvol/vbdev_lvol.o 00:02:56.325 CC module/bdev/iscsi/bdev_iscsi.o 00:02:56.325 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:56.325 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:56.325 CC module/bdev/split/vbdev_split.o 00:02:56.325 CC module/bdev/split/vbdev_split_rpc.o 00:02:56.325 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:56.325 CC module/blobfs/bdev/blobfs_bdev.o 00:02:56.325 CC module/bdev/aio/bdev_aio.o 00:02:56.325 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:56.325 CC module/bdev/nvme/bdev_nvme.o 00:02:56.325 CC module/bdev/aio/bdev_aio_rpc.o 00:02:56.325 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:56.325 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:56.325 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:56.325 CC module/bdev/nvme/nvme_rpc.o 00:02:56.325 CC module/bdev/nvme/bdev_mdns_client.o 00:02:56.325 CC module/bdev/nvme/vbdev_opal.o 00:02:56.325 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:56.325 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:56.325 LIB libspdk_blobfs_bdev.a 00:02:56.325 LIB libspdk_bdev_error.a 00:02:56.325 LIB libspdk_bdev_split.a 00:02:56.325 SO libspdk_blobfs_bdev.so.5.0 00:02:56.585 LIB libspdk_bdev_null.a 00:02:56.585 LIB libspdk_bdev_gpt.a 00:02:56.585 LIB libspdk_bdev_ftl.a 00:02:56.585 SO libspdk_bdev_split.so.5.0 00:02:56.585 SO libspdk_bdev_error.so.5.0 00:02:56.585 LIB libspdk_bdev_passthru.a 00:02:56.585 SO libspdk_bdev_null.so.5.0 00:02:56.585 SYMLINK libspdk_blobfs_bdev.so 00:02:56.585 SO libspdk_bdev_gpt.so.5.0 00:02:56.585 SO libspdk_bdev_ftl.so.5.0 00:02:56.585 LIB libspdk_bdev_zone_block.a 00:02:56.585 SO libspdk_bdev_passthru.so.5.0 00:02:56.585 SYMLINK libspdk_bdev_error.so 00:02:56.585 LIB libspdk_bdev_delay.a 00:02:56.585 LIB libspdk_bdev_aio.a 00:02:56.585 LIB libspdk_bdev_malloc.a 00:02:56.585 
SYMLINK libspdk_bdev_null.so 00:02:56.585 SYMLINK libspdk_bdev_split.so 00:02:56.585 SO libspdk_bdev_zone_block.so.5.0 00:02:56.585 LIB libspdk_bdev_iscsi.a 00:02:56.586 SYMLINK libspdk_bdev_gpt.so 00:02:56.586 SO libspdk_bdev_aio.so.5.0 00:02:56.586 SO libspdk_bdev_delay.so.5.0 00:02:56.586 SYMLINK libspdk_bdev_ftl.so 00:02:56.586 SO libspdk_bdev_malloc.so.5.0 00:02:56.586 SYMLINK libspdk_bdev_passthru.so 00:02:56.586 SO libspdk_bdev_iscsi.so.5.0 00:02:56.586 SYMLINK libspdk_bdev_zone_block.so 00:02:56.586 LIB libspdk_bdev_lvol.a 00:02:56.586 SYMLINK libspdk_bdev_aio.so 00:02:56.586 SYMLINK libspdk_bdev_malloc.so 00:02:56.586 SYMLINK libspdk_bdev_delay.so 00:02:56.586 SYMLINK libspdk_bdev_iscsi.so 00:02:56.586 LIB libspdk_bdev_virtio.a 00:02:56.586 SO libspdk_bdev_lvol.so.5.0 00:02:56.845 SO libspdk_bdev_virtio.so.5.0 00:02:56.845 SYMLINK libspdk_bdev_lvol.so 00:02:56.845 SYMLINK libspdk_bdev_virtio.so 00:02:56.845 LIB libspdk_bdev_raid.a 00:02:56.845 SO libspdk_bdev_raid.so.5.0 00:02:57.104 SYMLINK libspdk_bdev_raid.so 00:02:57.673 LIB libspdk_bdev_nvme.a 00:02:57.673 SO libspdk_bdev_nvme.so.6.0 00:02:57.932 SYMLINK libspdk_bdev_nvme.so 00:02:58.500 CC module/event/subsystems/iobuf/iobuf.o 00:02:58.500 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:58.500 CC module/event/subsystems/vmd/vmd.o 00:02:58.500 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:58.500 CC module/event/subsystems/sock/sock.o 00:02:58.500 CC module/event/subsystems/scheduler/scheduler.o 00:02:58.500 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:58.500 LIB libspdk_event_iobuf.a 00:02:58.500 LIB libspdk_event_sock.a 00:02:58.500 LIB libspdk_event_vmd.a 00:02:58.500 LIB libspdk_event_vhost_blk.a 00:02:58.500 LIB libspdk_event_scheduler.a 00:02:58.500 SO libspdk_event_iobuf.so.2.0 00:02:58.500 SO libspdk_event_sock.so.4.0 00:02:58.500 SO libspdk_event_vmd.so.5.0 00:02:58.500 SO libspdk_event_vhost_blk.so.2.0 00:02:58.500 SO libspdk_event_scheduler.so.3.0 00:02:58.500 SYMLINK libspdk_event_iobuf.so 00:02:58.500 SYMLINK libspdk_event_vhost_blk.so 00:02:58.500 SYMLINK libspdk_event_sock.so 00:02:58.500 SYMLINK libspdk_event_scheduler.so 00:02:58.500 SYMLINK libspdk_event_vmd.so 00:02:58.759 CC module/event/subsystems/accel/accel.o 00:02:59.017 LIB libspdk_event_accel.a 00:02:59.017 SO libspdk_event_accel.so.5.0 00:02:59.017 SYMLINK libspdk_event_accel.so 00:02:59.276 CC module/event/subsystems/bdev/bdev.o 00:02:59.534 LIB libspdk_event_bdev.a 00:02:59.534 SO libspdk_event_bdev.so.5.0 00:02:59.534 SYMLINK libspdk_event_bdev.so 00:02:59.793 CC module/event/subsystems/scsi/scsi.o 00:02:59.793 CC module/event/subsystems/ublk/ublk.o 00:02:59.793 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:59.793 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:59.793 CC module/event/subsystems/nbd/nbd.o 00:03:00.052 LIB libspdk_event_ublk.a 00:03:00.052 LIB libspdk_event_scsi.a 00:03:00.052 LIB libspdk_event_nbd.a 00:03:00.052 SO libspdk_event_scsi.so.5.0 00:03:00.052 SO libspdk_event_ublk.so.2.0 00:03:00.052 SO libspdk_event_nbd.so.5.0 00:03:00.052 LIB libspdk_event_nvmf.a 00:03:00.052 SYMLINK libspdk_event_scsi.so 00:03:00.052 SO libspdk_event_nvmf.so.5.0 00:03:00.052 SYMLINK libspdk_event_ublk.so 00:03:00.052 SYMLINK libspdk_event_nbd.so 00:03:00.052 SYMLINK libspdk_event_nvmf.so 00:03:00.311 CC module/event/subsystems/iscsi/iscsi.o 00:03:00.311 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:00.570 LIB libspdk_event_vhost_scsi.a 00:03:00.570 LIB libspdk_event_iscsi.a 00:03:00.570 SO libspdk_event_vhost_scsi.so.2.0 
00:03:00.570 SO libspdk_event_iscsi.so.5.0 00:03:00.570 SYMLINK libspdk_event_vhost_scsi.so 00:03:00.570 SYMLINK libspdk_event_iscsi.so 00:03:00.828 SO libspdk.so.5.0 00:03:00.828 SYMLINK libspdk.so 00:03:01.091 TEST_HEADER include/spdk/accel.h 00:03:01.091 TEST_HEADER include/spdk/accel_module.h 00:03:01.091 TEST_HEADER include/spdk/assert.h 00:03:01.091 TEST_HEADER include/spdk/barrier.h 00:03:01.091 TEST_HEADER include/spdk/base64.h 00:03:01.091 TEST_HEADER include/spdk/bdev_module.h 00:03:01.091 TEST_HEADER include/spdk/bdev.h 00:03:01.091 TEST_HEADER include/spdk/bdev_zone.h 00:03:01.091 CC app/spdk_nvme_perf/perf.o 00:03:01.091 TEST_HEADER include/spdk/bit_array.h 00:03:01.091 TEST_HEADER include/spdk/blob_bdev.h 00:03:01.091 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:01.091 TEST_HEADER include/spdk/bit_pool.h 00:03:01.091 CC app/trace_record/trace_record.o 00:03:01.091 TEST_HEADER include/spdk/blob.h 00:03:01.091 TEST_HEADER include/spdk/blobfs.h 00:03:01.091 TEST_HEADER include/spdk/conf.h 00:03:01.091 TEST_HEADER include/spdk/config.h 00:03:01.091 CC test/rpc_client/rpc_client_test.o 00:03:01.091 TEST_HEADER include/spdk/cpuset.h 00:03:01.091 TEST_HEADER include/spdk/crc16.h 00:03:01.091 TEST_HEADER include/spdk/crc32.h 00:03:01.091 CC app/spdk_top/spdk_top.o 00:03:01.091 TEST_HEADER include/spdk/dif.h 00:03:01.091 TEST_HEADER include/spdk/crc64.h 00:03:01.091 CC app/spdk_nvme_identify/identify.o 00:03:01.091 CC app/spdk_lspci/spdk_lspci.o 00:03:01.091 TEST_HEADER include/spdk/dma.h 00:03:01.091 CXX app/trace/trace.o 00:03:01.091 TEST_HEADER include/spdk/endian.h 00:03:01.091 TEST_HEADER include/spdk/env.h 00:03:01.091 CC app/spdk_nvme_discover/discovery_aer.o 00:03:01.091 TEST_HEADER include/spdk/env_dpdk.h 00:03:01.091 TEST_HEADER include/spdk/event.h 00:03:01.091 TEST_HEADER include/spdk/fd_group.h 00:03:01.091 TEST_HEADER include/spdk/fd.h 00:03:01.091 TEST_HEADER include/spdk/file.h 00:03:01.091 TEST_HEADER include/spdk/ftl.h 00:03:01.091 TEST_HEADER include/spdk/gpt_spec.h 00:03:01.091 TEST_HEADER include/spdk/hexlify.h 00:03:01.091 TEST_HEADER include/spdk/histogram_data.h 00:03:01.091 TEST_HEADER include/spdk/idxd.h 00:03:01.091 TEST_HEADER include/spdk/idxd_spec.h 00:03:01.091 TEST_HEADER include/spdk/ioat.h 00:03:01.091 TEST_HEADER include/spdk/init.h 00:03:01.091 TEST_HEADER include/spdk/ioat_spec.h 00:03:01.091 TEST_HEADER include/spdk/iscsi_spec.h 00:03:01.091 TEST_HEADER include/spdk/json.h 00:03:01.091 TEST_HEADER include/spdk/likely.h 00:03:01.091 TEST_HEADER include/spdk/jsonrpc.h 00:03:01.091 TEST_HEADER include/spdk/log.h 00:03:01.091 TEST_HEADER include/spdk/lvol.h 00:03:01.091 TEST_HEADER include/spdk/memory.h 00:03:01.091 TEST_HEADER include/spdk/mmio.h 00:03:01.091 TEST_HEADER include/spdk/notify.h 00:03:01.091 TEST_HEADER include/spdk/nbd.h 00:03:01.091 TEST_HEADER include/spdk/nvme.h 00:03:01.091 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:01.091 TEST_HEADER include/spdk/nvme_intel.h 00:03:01.091 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:01.091 TEST_HEADER include/spdk/nvme_zns.h 00:03:01.091 TEST_HEADER include/spdk/nvme_spec.h 00:03:01.091 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:01.091 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:01.091 CC app/iscsi_tgt/iscsi_tgt.o 00:03:01.091 TEST_HEADER include/spdk/nvmf.h 00:03:01.091 TEST_HEADER include/spdk/nvmf_transport.h 00:03:01.091 CC app/spdk_dd/spdk_dd.o 00:03:01.092 TEST_HEADER include/spdk/nvmf_spec.h 00:03:01.092 TEST_HEADER include/spdk/opal.h 00:03:01.092 TEST_HEADER include/spdk/opal_spec.h 
00:03:01.092 TEST_HEADER include/spdk/pci_ids.h 00:03:01.092 TEST_HEADER include/spdk/queue.h 00:03:01.092 TEST_HEADER include/spdk/pipe.h 00:03:01.092 TEST_HEADER include/spdk/reduce.h 00:03:01.092 TEST_HEADER include/spdk/scheduler.h 00:03:01.092 TEST_HEADER include/spdk/rpc.h 00:03:01.092 TEST_HEADER include/spdk/scsi_spec.h 00:03:01.092 TEST_HEADER include/spdk/scsi.h 00:03:01.092 TEST_HEADER include/spdk/sock.h 00:03:01.092 TEST_HEADER include/spdk/stdinc.h 00:03:01.092 TEST_HEADER include/spdk/string.h 00:03:01.092 TEST_HEADER include/spdk/thread.h 00:03:01.092 TEST_HEADER include/spdk/trace.h 00:03:01.092 CC app/nvmf_tgt/nvmf_main.o 00:03:01.092 TEST_HEADER include/spdk/trace_parser.h 00:03:01.092 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:01.092 TEST_HEADER include/spdk/tree.h 00:03:01.092 TEST_HEADER include/spdk/ublk.h 00:03:01.092 TEST_HEADER include/spdk/util.h 00:03:01.092 TEST_HEADER include/spdk/uuid.h 00:03:01.092 TEST_HEADER include/spdk/version.h 00:03:01.092 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:01.092 TEST_HEADER include/spdk/vhost.h 00:03:01.092 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:01.092 CC app/vhost/vhost.o 00:03:01.092 CC app/spdk_tgt/spdk_tgt.o 00:03:01.092 TEST_HEADER include/spdk/vmd.h 00:03:01.092 TEST_HEADER include/spdk/zipf.h 00:03:01.092 TEST_HEADER include/spdk/xor.h 00:03:01.092 CXX test/cpp_headers/accel_module.o 00:03:01.092 CXX test/cpp_headers/accel.o 00:03:01.092 CXX test/cpp_headers/assert.o 00:03:01.092 CXX test/cpp_headers/barrier.o 00:03:01.092 CXX test/cpp_headers/base64.o 00:03:01.092 CXX test/cpp_headers/bdev.o 00:03:01.092 CXX test/cpp_headers/bdev_module.o 00:03:01.092 CXX test/cpp_headers/bdev_zone.o 00:03:01.092 CXX test/cpp_headers/bit_array.o 00:03:01.092 CXX test/cpp_headers/bit_pool.o 00:03:01.092 CXX test/cpp_headers/blobfs_bdev.o 00:03:01.092 CXX test/cpp_headers/blob_bdev.o 00:03:01.092 CXX test/cpp_headers/blobfs.o 00:03:01.092 CXX test/cpp_headers/blob.o 00:03:01.092 CXX test/cpp_headers/conf.o 00:03:01.092 CXX test/cpp_headers/config.o 00:03:01.092 CXX test/cpp_headers/cpuset.o 00:03:01.092 CXX test/cpp_headers/crc16.o 00:03:01.092 CXX test/cpp_headers/crc32.o 00:03:01.092 CXX test/cpp_headers/crc64.o 00:03:01.092 CXX test/cpp_headers/dif.o 00:03:01.092 CXX test/cpp_headers/dma.o 00:03:01.092 CXX test/cpp_headers/endian.o 00:03:01.092 CXX test/cpp_headers/env_dpdk.o 00:03:01.092 CXX test/cpp_headers/event.o 00:03:01.092 CXX test/cpp_headers/env.o 00:03:01.092 CXX test/cpp_headers/fd_group.o 00:03:01.092 CXX test/cpp_headers/fd.o 00:03:01.092 CXX test/cpp_headers/file.o 00:03:01.092 CXX test/cpp_headers/ftl.o 00:03:01.092 CXX test/cpp_headers/gpt_spec.o 00:03:01.092 CXX test/cpp_headers/hexlify.o 00:03:01.092 CC test/event/event_perf/event_perf.o 00:03:01.092 CXX test/cpp_headers/histogram_data.o 00:03:01.092 CXX test/cpp_headers/idxd.o 00:03:01.092 CC test/nvme/aer/aer.o 00:03:01.092 CXX test/cpp_headers/idxd_spec.o 00:03:01.092 CXX test/cpp_headers/init.o 00:03:01.092 CC test/thread/poller_perf/poller_perf.o 00:03:01.092 CXX test/cpp_headers/ioat.o 00:03:01.092 CC test/nvme/sgl/sgl.o 00:03:01.092 CC test/nvme/reset/reset.o 00:03:01.092 CC test/nvme/connect_stress/connect_stress.o 00:03:01.092 CC test/nvme/overhead/overhead.o 00:03:01.092 CC test/nvme/err_injection/err_injection.o 00:03:01.092 CC test/env/memory/memory_ut.o 00:03:01.092 CC test/event/reactor_perf/reactor_perf.o 00:03:01.092 CC test/event/reactor/reactor.o 00:03:01.092 CC examples/nvme/hello_world/hello_world.o 00:03:01.092 CC 
examples/accel/perf/accel_perf.o 00:03:01.092 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:01.092 CC examples/nvme/arbitration/arbitration.o 00:03:01.092 CC test/nvme/startup/startup.o 00:03:01.092 CC test/nvme/e2edp/nvme_dp.o 00:03:01.092 CC test/nvme/fused_ordering/fused_ordering.o 00:03:01.092 CC test/event/app_repeat/app_repeat.o 00:03:01.092 CC test/nvme/compliance/nvme_compliance.o 00:03:01.092 CC examples/ioat/verify/verify.o 00:03:01.092 CC examples/nvme/reconnect/reconnect.o 00:03:01.092 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:01.092 CC examples/nvme/abort/abort.o 00:03:01.092 CC test/nvme/reserve/reserve.o 00:03:01.092 CC test/app/histogram_perf/histogram_perf.o 00:03:01.092 CC test/nvme/fdp/fdp.o 00:03:01.092 CC test/app/jsoncat/jsoncat.o 00:03:01.092 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:01.092 CC test/env/vtophys/vtophys.o 00:03:01.092 CC test/env/pci/pci_ut.o 00:03:01.092 CC examples/ioat/perf/perf.o 00:03:01.092 CC examples/vmd/lsvmd/lsvmd.o 00:03:01.092 CC examples/nvme/hotplug/hotplug.o 00:03:01.092 CC test/nvme/simple_copy/simple_copy.o 00:03:01.092 CC test/nvme/boot_partition/boot_partition.o 00:03:01.092 CC test/blobfs/mkfs/mkfs.o 00:03:01.092 CC examples/idxd/perf/perf.o 00:03:01.092 CC test/nvme/cuse/cuse.o 00:03:01.092 CC examples/util/zipf/zipf.o 00:03:01.092 CC test/app/stub/stub.o 00:03:01.092 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:01.092 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:01.092 CC examples/vmd/led/led.o 00:03:01.092 CC app/fio/nvme/fio_plugin.o 00:03:01.366 CC examples/sock/hello_world/hello_sock.o 00:03:01.366 CC test/accel/dif/dif.o 00:03:01.366 CXX test/cpp_headers/ioat_spec.o 00:03:01.366 CC test/bdev/bdevio/bdevio.o 00:03:01.366 CC test/app/bdev_svc/bdev_svc.o 00:03:01.366 CC examples/nvmf/nvmf/nvmf.o 00:03:01.366 CC test/dma/test_dma/test_dma.o 00:03:01.366 CC examples/bdev/hello_world/hello_bdev.o 00:03:01.366 CC test/event/scheduler/scheduler.o 00:03:01.366 CC app/fio/bdev/fio_plugin.o 00:03:01.366 CC examples/thread/thread/thread_ex.o 00:03:01.366 CC examples/bdev/bdevperf/bdevperf.o 00:03:01.366 CC examples/blob/cli/blobcli.o 00:03:01.366 CC examples/blob/hello_world/hello_blob.o 00:03:01.366 CC test/env/mem_callbacks/mem_callbacks.o 00:03:01.366 CC test/lvol/esnap/esnap.o 00:03:01.366 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:01.629 LINK spdk_lspci 00:03:01.629 LINK rpc_client_test 00:03:01.629 LINK interrupt_tgt 00:03:01.629 LINK spdk_nvme_discover 00:03:01.629 LINK vhost 00:03:01.629 LINK reactor 00:03:01.629 LINK nvmf_tgt 00:03:01.629 LINK iscsi_tgt 00:03:01.629 LINK reactor_perf 00:03:01.629 LINK lsvmd 00:03:01.629 LINK jsoncat 00:03:01.629 LINK histogram_perf 00:03:01.629 LINK event_perf 00:03:01.629 LINK connect_stress 00:03:01.629 LINK poller_perf 00:03:01.895 LINK env_dpdk_post_init 00:03:01.895 LINK app_repeat 00:03:01.895 LINK spdk_trace_record 00:03:01.895 LINK boot_partition 00:03:01.895 LINK fused_ordering 00:03:01.895 LINK pmr_persistence 00:03:01.895 LINK zipf 00:03:01.895 LINK vtophys 00:03:01.895 LINK spdk_tgt 00:03:01.895 LINK led 00:03:01.895 LINK reserve 00:03:01.895 LINK stub 00:03:01.895 LINK doorbell_aers 00:03:01.895 LINK startup 00:03:01.895 CXX test/cpp_headers/iscsi_spec.o 00:03:01.895 LINK mkfs 00:03:01.895 LINK cmb_copy 00:03:01.895 LINK verify 00:03:01.895 LINK err_injection 00:03:01.895 CXX test/cpp_headers/json.o 00:03:01.895 CXX test/cpp_headers/jsonrpc.o 00:03:01.895 CXX test/cpp_headers/likely.o 00:03:01.895 LINK hello_world 00:03:01.895 CXX 
test/cpp_headers/log.o 00:03:01.895 CXX test/cpp_headers/lvol.o 00:03:01.895 CXX test/cpp_headers/memory.o 00:03:01.895 LINK simple_copy 00:03:01.895 LINK bdev_svc 00:03:01.895 CXX test/cpp_headers/mmio.o 00:03:01.895 CXX test/cpp_headers/nbd.o 00:03:01.895 CXX test/cpp_headers/notify.o 00:03:01.895 CXX test/cpp_headers/nvme.o 00:03:01.895 CXX test/cpp_headers/nvme_intel.o 00:03:01.895 CXX test/cpp_headers/nvme_ocssd.o 00:03:01.895 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:01.895 LINK ioat_perf 00:03:01.895 CXX test/cpp_headers/nvme_spec.o 00:03:01.895 CXX test/cpp_headers/nvme_zns.o 00:03:01.895 CXX test/cpp_headers/nvmf_cmd.o 00:03:01.895 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:01.895 CXX test/cpp_headers/nvmf.o 00:03:01.895 CXX test/cpp_headers/nvmf_spec.o 00:03:01.895 CXX test/cpp_headers/nvmf_transport.o 00:03:01.895 CXX test/cpp_headers/opal.o 00:03:01.895 CXX test/cpp_headers/opal_spec.o 00:03:01.895 CXX test/cpp_headers/pci_ids.o 00:03:01.895 LINK reset 00:03:01.895 LINK overhead 00:03:01.895 CXX test/cpp_headers/pipe.o 00:03:01.895 CXX test/cpp_headers/queue.o 00:03:01.895 CXX test/cpp_headers/reduce.o 00:03:01.895 LINK scheduler 00:03:01.895 LINK hello_sock 00:03:01.895 CXX test/cpp_headers/rpc.o 00:03:01.895 LINK hotplug 00:03:01.895 LINK aer 00:03:01.895 CXX test/cpp_headers/scheduler.o 00:03:01.895 CXX test/cpp_headers/scsi.o 00:03:01.895 CXX test/cpp_headers/scsi_spec.o 00:03:01.895 CXX test/cpp_headers/sock.o 00:03:01.895 LINK nvme_dp 00:03:01.895 LINK thread 00:03:01.895 CXX test/cpp_headers/stdinc.o 00:03:01.895 LINK hello_bdev 00:03:01.895 LINK sgl 00:03:01.895 CXX test/cpp_headers/string.o 00:03:01.895 LINK mem_callbacks 00:03:01.895 LINK arbitration 00:03:01.896 CXX test/cpp_headers/thread.o 00:03:01.896 LINK spdk_dd 00:03:01.896 LINK hello_blob 00:03:01.896 CXX test/cpp_headers/trace.o 00:03:01.896 CXX test/cpp_headers/trace_parser.o 00:03:02.158 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:02.158 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:02.158 LINK nvmf 00:03:02.158 LINK fdp 00:03:02.158 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:02.158 CXX test/cpp_headers/tree.o 00:03:02.158 LINK nvme_compliance 00:03:02.158 LINK abort 00:03:02.158 CXX test/cpp_headers/ublk.o 00:03:02.158 CXX test/cpp_headers/util.o 00:03:02.158 CXX test/cpp_headers/version.o 00:03:02.158 CXX test/cpp_headers/uuid.o 00:03:02.158 LINK idxd_perf 00:03:02.158 CXX test/cpp_headers/vfio_user_pci.o 00:03:02.158 LINK bdevio 00:03:02.158 LINK dif 00:03:02.158 CXX test/cpp_headers/vfio_user_spec.o 00:03:02.158 LINK spdk_trace 00:03:02.158 CXX test/cpp_headers/vmd.o 00:03:02.158 CXX test/cpp_headers/vhost.o 00:03:02.158 CXX test/cpp_headers/xor.o 00:03:02.158 LINK pci_ut 00:03:02.158 LINK reconnect 00:03:02.158 CXX test/cpp_headers/zipf.o 00:03:02.158 LINK accel_perf 00:03:02.158 LINK test_dma 00:03:02.158 LINK nvme_manage 00:03:02.415 LINK spdk_nvme 00:03:02.415 LINK memory_ut 00:03:02.415 LINK blobcli 00:03:02.415 LINK nvme_fuzz 00:03:02.415 LINK spdk_bdev 00:03:02.672 LINK spdk_nvme_perf 00:03:02.672 LINK spdk_nvme_identify 00:03:02.672 LINK spdk_top 00:03:02.672 LINK bdevperf 00:03:02.672 LINK vhost_fuzz 00:03:02.672 LINK cuse 00:03:03.239 LINK iscsi_fuzz 00:03:05.140 LINK esnap 00:03:05.399 00:03:05.399 real 0m30.899s 00:03:05.399 user 4m50.358s 00:03:05.399 sys 2m45.890s 00:03:05.399 21:06:40 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:05.399 21:06:40 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.399 ************************************ 00:03:05.399 END TEST 
make 00:03:05.399 ************************************ 00:03:05.399 21:06:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:05.399 21:06:40 -- nvmf/common.sh@7 -- # uname -s 00:03:05.399 21:06:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.399 21:06:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.399 21:06:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.399 21:06:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.399 21:06:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.399 21:06:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.399 21:06:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.399 21:06:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.399 21:06:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.399 21:06:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.399 21:06:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:05.399 21:06:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:05.399 21:06:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.399 21:06:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.399 21:06:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:05.399 21:06:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:05.399 21:06:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.399 21:06:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.399 21:06:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.399 21:06:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.399 21:06:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.399 21:06:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.399 21:06:40 -- paths/export.sh@5 -- # export PATH 00:03:05.399 21:06:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.399 21:06:40 -- nvmf/common.sh@46 -- # : 0 00:03:05.399 21:06:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:05.399 21:06:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:05.399 21:06:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:05.399 21:06:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.399 21:06:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 
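At this point autotest sources test/nvmf/common.sh. Condensed from the sourcing trace, the environment it establishes for this phy run comes down to the block below; NVME_HOSTNQN is generated with `nvme gen-hostnqn`, and deriving NVME_HOSTID from it with a parameter expansion is an assumption made only for this sketch.

```bash
# Defaults pulled from the sourcing trace above (phy = real NICs, not virtual links).
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: host ID is the UUID portion of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NET_TYPE=phy
```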
00:03:05.399 21:06:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:05.399 21:06:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:05.399 21:06:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:05.399 21:06:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.399 21:06:40 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.399 21:06:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.399 21:06:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.399 21:06:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:05.399 21:06:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.399 21:06:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:05.399 21:06:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.399 21:06:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.658 21:06:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.658 21:06:40 -- spdk/autotest.sh@48 -- # udevadm_pid=1439968 00:03:05.658 21:06:40 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:05.658 21:06:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.658 21:06:40 -- spdk/autotest.sh@54 -- # echo 1439970 00:03:05.658 21:06:40 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:05.658 21:06:40 -- spdk/autotest.sh@56 -- # echo 1439971 00:03:05.658 21:06:40 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:05.658 21:06:40 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:03:05.658 21:06:40 -- spdk/autotest.sh@60 -- # echo 1439972 00:03:05.658 21:06:40 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:03:05.658 21:06:40 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:03:05.658 21:06:40 -- spdk/autotest.sh@62 -- # echo 1439974 00:03:05.658 21:06:40 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:05.658 21:06:40 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:05.658 21:06:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:05.658 21:06:40 -- common/autotest_common.sh@10 -- # set +x 00:03:05.658 21:06:40 -- spdk/autotest.sh@70 -- # create_test_list 00:03:05.658 21:06:40 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:05.658 21:06:40 -- common/autotest_common.sh@10 -- # set +x 00:03:05.658 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:05.658 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:05.658 21:06:40 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:05.658 21:06:40 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:05.658 21:06:40 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:05.658 21:06:40 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:05.658 21:06:40 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:05.658 21:06:40 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:05.658 21:06:40 -- common/autotest_common.sh@1440 -- # uname 00:03:05.658 21:06:40 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:05.658 21:06:40 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:05.658 21:06:40 -- common/autotest_common.sh@1460 -- # uname 00:03:05.658 21:06:40 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:05.658 21:06:40 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:05.658 21:06:40 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:05.658 21:06:40 -- spdk/autotest.sh@83 -- # hash lcov 00:03:05.658 21:06:40 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:05.658 21:06:40 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:05.658 --rc lcov_branch_coverage=1 00:03:05.658 --rc lcov_function_coverage=1 00:03:05.658 --rc genhtml_branch_coverage=1 00:03:05.658 --rc genhtml_function_coverage=1 00:03:05.658 --rc genhtml_legend=1 00:03:05.658 --rc geninfo_all_blocks=1 00:03:05.658 ' 00:03:05.658 21:06:40 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:05.658 --rc lcov_branch_coverage=1 00:03:05.658 --rc lcov_function_coverage=1 00:03:05.658 --rc genhtml_branch_coverage=1 00:03:05.658 --rc genhtml_function_coverage=1 00:03:05.658 --rc genhtml_legend=1 00:03:05.658 --rc geninfo_all_blocks=1 00:03:05.658 ' 00:03:05.658 21:06:40 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:05.658 --rc lcov_branch_coverage=1 00:03:05.658 --rc lcov_function_coverage=1 00:03:05.658 --rc genhtml_branch_coverage=1 00:03:05.658 --rc genhtml_function_coverage=1 00:03:05.658 --rc genhtml_legend=1 00:03:05.658 --rc geninfo_all_blocks=1 00:03:05.658 
--no-external' 00:03:05.658 21:06:40 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:05.658 --rc lcov_branch_coverage=1 00:03:05.658 --rc lcov_function_coverage=1 00:03:05.658 --rc genhtml_branch_coverage=1 00:03:05.658 --rc genhtml_function_coverage=1 00:03:05.658 --rc genhtml_legend=1 00:03:05.658 --rc geninfo_all_blocks=1 00:03:05.658 --no-external' 00:03:05.658 21:06:40 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:05.658 lcov: LCOV version 1.14 00:03:05.658 21:06:40 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:08.185 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:08.185 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:08.185 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:08.185 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:08.185 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:08.185 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 
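The exported LCOV_OPTS and the `-c -i -t Baseline` capture above are the first half of autotest's coverage handling; the post-test capture and the merge of the two tracefiles run much later, so they appear in the sketch below only to complete the picture, with $src and $out as placeholder paths and "Tests" as a placeholder test name. The geninfo "no functions found" warnings that follow are geninfo noting .gcno files with no instrumented functions and are benign.

```bash
# Coverage flow sketched end to end; only the Baseline capture has run so far.
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
$LCOV -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"   # zero-count baseline, taken here
# ... the test suites run in between ...
$LCOV -q -c    -t Tests    -d "$src" -o "$out/cov_test.info"   # counters accumulated by the tests
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
```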
00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:26.253 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 
00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:26.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:26.253 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:26.254 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:26.254 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:26.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:28.780 21:07:03 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:28.780 21:07:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:28.780 21:07:03 -- common/autotest_common.sh@10 -- # set +x 00:03:28.780 21:07:03 -- spdk/autotest.sh@102 -- # rm -f 00:03:28.780 21:07:03 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.963 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:32.963 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:33.222 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:33.222 0000:d8:00.0 (8086 
0a54): Already using the nvme driver 00:03:33.222 21:07:07 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:33.222 21:07:07 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:33.222 21:07:07 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:33.222 21:07:07 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:33.222 21:07:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:33.222 21:07:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:33.222 21:07:07 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:33.222 21:07:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:33.222 21:07:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:33.222 21:07:07 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:33.222 21:07:07 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:33.222 21:07:07 -- spdk/autotest.sh@121 -- # grep -v p 00:03:33.222 21:07:07 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:33.222 21:07:07 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:33.222 21:07:07 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:33.222 21:07:07 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:33.222 21:07:07 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:33.222 No valid GPT data, bailing 00:03:33.222 21:07:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:33.222 21:07:07 -- scripts/common.sh@393 -- # pt= 00:03:33.222 21:07:07 -- scripts/common.sh@394 -- # return 1 00:03:33.222 21:07:07 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:33.222 1+0 records in 00:03:33.222 1+0 records out 00:03:33.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065379 s, 160 MB/s 00:03:33.222 21:07:07 -- spdk/autotest.sh@129 -- # sync 00:03:33.222 21:07:07 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:33.222 21:07:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:33.222 21:07:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:39.792 21:07:13 -- spdk/autotest.sh@135 -- # uname -s 00:03:39.792 21:07:13 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:39.792 21:07:13 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:39.792 21:07:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:39.792 21:07:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:39.792 21:07:13 -- common/autotest_common.sh@10 -- # set +x 00:03:39.792 ************************************ 00:03:39.792 START TEST setup.sh 00:03:39.792 ************************************ 00:03:39.792 21:07:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:39.792 * Looking for test storage... 
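The sequence just traced (get_zoned_devs, the spdk-gpt.py probe that bails with "No valid GPT data", the blkid PTTYPE check, then the 1 MiB dd) is autotest's pre-test scrub of the NVMe namespaces. A condensed sketch of that loop follows; the real helper also probes with spdk-gpt.py before falling back to blkid, and the exact control flow in autotest.sh differs slightly.

```bash
# Zero the first MiB of every whole-disk NVMe node that is neither zoned nor
# carrying a partition table, so stale GPT and filesystem signatures cannot
# leak into the tests. Destructive by design; CI hosts only.
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    zoned=$(cat "/sys/block/$(basename "$dev")/queue/zoned" 2>/dev/null)
    [[ -n "$zoned" && "$zoned" != none ]] && continue        # leave zoned namespaces alone
    pt=$(blkid -s PTTYPE -o value "$dev")                    # same probe block_in_use runs above
    [[ -n "$pt" ]] && continue                               # a partition table means "in use"
    dd if=/dev/zero of="$dev" bs=1M count=1                  # 1+0 records in/out, ~160 MB/s above
done
sync
```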
00:03:39.792 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:39.792 21:07:14 -- setup/test-setup.sh@10 -- # uname -s 00:03:39.792 21:07:14 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:39.792 21:07:14 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:39.792 21:07:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:39.792 21:07:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:39.792 21:07:14 -- common/autotest_common.sh@10 -- # set +x 00:03:39.792 ************************************ 00:03:39.792 START TEST acl 00:03:39.792 ************************************ 00:03:39.792 21:07:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:39.792 * Looking for test storage... 00:03:39.792 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:39.792 21:07:14 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:39.792 21:07:14 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:39.792 21:07:14 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:39.792 21:07:14 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:39.792 21:07:14 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:39.792 21:07:14 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:39.792 21:07:14 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:39.792 21:07:14 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.792 21:07:14 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:39.792 21:07:14 -- setup/acl.sh@12 -- # devs=() 00:03:39.792 21:07:14 -- setup/acl.sh@12 -- # declare -a devs 00:03:39.792 21:07:14 -- setup/acl.sh@13 -- # drivers=() 00:03:39.792 21:07:14 -- setup/acl.sh@13 -- # declare -A drivers 00:03:39.792 21:07:14 -- setup/acl.sh@51 -- # setup reset 00:03:39.792 21:07:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.792 21:07:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.984 21:07:18 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:43.984 21:07:18 -- setup/acl.sh@16 -- # local dev driver 00:03:43.984 21:07:18 -- setup/acl.sh@15 -- # setup output status 00:03:43.984 21:07:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.984 21:07:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.984 21:07:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:48.225 Hugepages 00:03:48.225 node hugesize free / total 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # continue 00:03:48.225 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # continue 00:03:48.225 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # continue 00:03:48.225 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.225 00:03:48.225 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # continue 00:03:48.225 21:07:22 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:48.225 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.225 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.225 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:48.225 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.225 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.225 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:48.225 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.225 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.225 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.225 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 
21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # continue 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.226 21:07:22 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:48.226 21:07:22 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.226 21:07:22 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.226 21:07:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.226 21:07:22 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:48.226 21:07:22 -- setup/acl.sh@54 -- # run_test denied denied 00:03:48.226 21:07:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.226 21:07:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.226 21:07:22 -- common/autotest_common.sh@10 -- # set +x 00:03:48.226 ************************************ 00:03:48.226 START TEST denied 00:03:48.226 ************************************ 00:03:48.226 21:07:22 -- common/autotest_common.sh@1104 -- # denied 00:03:48.226 21:07:22 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:48.226 21:07:22 -- setup/acl.sh@38 -- # setup output config 00:03:48.226 21:07:22 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:48.226 21:07:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.226 21:07:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:52.420 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:52.420 21:07:26 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:52.420 21:07:26 -- setup/acl.sh@28 -- # local dev driver 00:03:52.420 21:07:26 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:52.420 21:07:26 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:52.420 21:07:26 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:52.420 21:07:26 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:52.420 21:07:26 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:52.420 21:07:26 -- setup/acl.sh@41 -- # setup reset 00:03:52.420 21:07:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.420 21:07:26 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.698 00:03:57.698 real 0m9.410s 00:03:57.698 user 0m3.041s 00:03:57.698 sys 0m5.798s 00:03:57.698 21:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.698 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:03:57.698 ************************************ 00:03:57.698 END TEST denied 00:03:57.698 ************************************ 00:03:57.698 21:07:32 -- setup/acl.sh@55 -- # 
run_test allowed allowed 00:03:57.698 21:07:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:57.698 21:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:57.698 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:03:57.698 ************************************ 00:03:57.698 START TEST allowed 00:03:57.698 ************************************ 00:03:57.698 21:07:32 -- common/autotest_common.sh@1104 -- # allowed 00:03:57.698 21:07:32 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:57.698 21:07:32 -- setup/acl.sh@45 -- # setup output config 00:03:57.698 21:07:32 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:57.698 21:07:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.698 21:07:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:04.271 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.271 21:07:37 -- setup/acl.sh@47 -- # verify 00:04:04.271 21:07:37 -- setup/acl.sh@28 -- # local dev driver 00:04:04.271 21:07:37 -- setup/acl.sh@48 -- # setup reset 00:04:04.271 21:07:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.271 21:07:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.561 00:04:07.561 real 0m10.176s 00:04:07.561 user 0m2.659s 00:04:07.561 sys 0m5.647s 00:04:07.561 21:07:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.561 21:07:42 -- common/autotest_common.sh@10 -- # set +x 00:04:07.561 ************************************ 00:04:07.561 END TEST allowed 00:04:07.561 ************************************ 00:04:07.820 00:04:07.820 real 0m28.385s 00:04:07.820 user 0m8.761s 00:04:07.820 sys 0m17.502s 00:04:07.820 21:07:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.820 21:07:42 -- common/autotest_common.sh@10 -- # set +x 00:04:07.820 ************************************ 00:04:07.820 END TEST acl 00:04:07.820 ************************************ 00:04:07.821 21:07:42 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:07.821 21:07:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.821 21:07:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.821 21:07:42 -- common/autotest_common.sh@10 -- # set +x 00:04:07.821 ************************************ 00:04:07.821 START TEST hugepages 00:04:07.821 ************************************ 00:04:07.821 21:07:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:07.821 * Looking for test storage... 
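The acl test that just finished exercises the PCI allow/block lists honoured by scripts/setup.sh: with the NVMe controller in PCI_BLOCKED, `setup.sh config` must report it as skipped, and with it in PCI_ALLOWED the same command rebinds it from the kernel nvme driver to vfio-pci. Reduced to its essentials, run from the spdk checkout with the device address taken from the log:

```bash
# Denied: the controller stays on the kernel nvme driver and is reported skipped.
PCI_BLOCKED=" 0000:d8:00.0" ./scripts/setup.sh config \
    | grep 'Skipping denied controller at 0000:d8:00.0'
./scripts/setup.sh reset

# Allowed: the controller is rebound to vfio-pci for userspace use.
PCI_ALLOWED="0000:d8:00.0" ./scripts/setup.sh config \
    | grep -E '0000:d8:00.0 .*: nvme -> .*'
./scripts/setup.sh reset
```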
00:04:07.821 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:07.821 21:07:42 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:07.821 21:07:42 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:07.821 21:07:42 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:07.821 21:07:42 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:07.821 21:07:42 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:07.821 21:07:42 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:07.821 21:07:42 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:07.821 21:07:42 -- setup/common.sh@18 -- # local node= 00:04:07.821 21:07:42 -- setup/common.sh@19 -- # local var val 00:04:07.821 21:07:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.821 21:07:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.821 21:07:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.821 21:07:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.821 21:07:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.821 21:07:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 34448316 kB' 'MemAvailable: 39593312 kB' 'Buffers: 4096 kB' 'Cached: 17043772 kB' 'SwapCached: 0 kB' 'Active: 12866428 kB' 'Inactive: 4709516 kB' 'Active(anon): 12388072 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531492 kB' 'Mapped: 213600 kB' 'Shmem: 11859996 kB' 'KReclaimable: 604616 kB' 'Slab: 1318468 kB' 'SReclaimable: 604616 kB' 'SUnreclaim: 713852 kB' 'KernelStack: 22704 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 13886428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220788 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.821 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # continue 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 21:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 21:07:42 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.822 21:07:42 -- setup/common.sh@33 -- # echo 2048 00:04:07.822 21:07:42 -- setup/common.sh@33 -- # return 0 00:04:07.822 21:07:42 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:07.822 21:07:42 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:07.822 21:07:42 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:07.822 21:07:42 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:07.822 21:07:42 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:07.822 21:07:42 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
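Every number the hugepages suite works with comes out of the get_meminfo helper traced just above: it loads /proc/meminfo (or a per-node meminfo file when a node is given), splits each line on IFS=': ', and walks the keys one by one, which is why the trace shows a [[ key == Hugepagesize ]] / continue pair for every field before the Hugepagesize line finally matches, echoes 2048 and returns. A rough stand-alone equivalent, a simplification that leaves out the per-node /sys/devices/system/node handling:

  # Return the value of one /proc/meminfo field, e.g. Hugepagesize -> 2048.
  get_meminfo() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          # Each line looks like "Hugepagesize:    2048 kB"; keep only the number.
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  default_hugepages=$(get_meminfo Hugepagesize)   # 2048 (kB) in this run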
00:04:07.822 21:07:42 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:07.822 21:07:42 -- setup/hugepages.sh@207 -- # get_nodes 00:04:07.822 21:07:42 -- setup/hugepages.sh@27 -- # local node 00:04:07.822 21:07:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.822 21:07:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:07.822 21:07:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.822 21:07:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.822 21:07:42 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.822 21:07:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.822 21:07:42 -- setup/hugepages.sh@208 -- # clear_hp 00:04:07.822 21:07:42 -- setup/hugepages.sh@37 -- # local node hp 00:04:07.822 21:07:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.822 21:07:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.822 21:07:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.822 21:07:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.822 21:07:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.822 21:07:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.822 21:07:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.822 21:07:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.822 21:07:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.822 21:07:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.822 21:07:42 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.822 21:07:42 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.822 21:07:42 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:07.822 21:07:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.822 21:07:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.822 21:07:42 -- common/autotest_common.sh@10 -- # set +x 00:04:07.822 ************************************ 00:04:07.822 START TEST default_setup 00:04:07.822 ************************************ 00:04:07.822 21:07:42 -- common/autotest_common.sh@1104 -- # default_setup 00:04:07.822 21:07:42 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:07.822 21:07:42 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:07.822 21:07:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:07.822 21:07:42 -- setup/hugepages.sh@51 -- # shift 00:04:07.822 21:07:42 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:07.822 21:07:42 -- setup/hugepages.sh@52 -- # local node_ids 00:04:07.822 21:07:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.822 21:07:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:07.822 21:07:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:07.822 21:07:42 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:07.822 21:07:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.822 21:07:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:07.822 21:07:42 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.822 21:07:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.822 21:07:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.822 21:07:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:07.823 21:07:42 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.823 21:07:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:07.823 21:07:42 -- setup/hugepages.sh@73 -- # return 0 00:04:07.823 21:07:42 -- setup/hugepages.sh@137 -- # setup output 00:04:07.823 21:07:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.823 21:07:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:12.016 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.016 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.992 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:13.992 21:07:48 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:13.992 21:07:48 -- setup/hugepages.sh@89 -- # local node 00:04:13.992 21:07:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.992 21:07:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.992 21:07:48 -- setup/hugepages.sh@92 -- # local surp 00:04:13.992 21:07:48 -- setup/hugepages.sh@93 -- # local resv 00:04:13.992 21:07:48 -- setup/hugepages.sh@94 -- # local anon 00:04:13.992 21:07:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.992 21:07:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.992 21:07:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.992 21:07:48 -- setup/common.sh@18 -- # local node= 00:04:13.992 21:07:48 -- setup/common.sh@19 -- # local var val 00:04:13.992 21:07:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.992 21:07:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.992 21:07:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.992 21:07:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.992 21:07:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.992 21:07:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36673892 kB' 'MemAvailable: 41818824 kB' 'Buffers: 4096 kB' 'Cached: 17043908 kB' 'SwapCached: 0 kB' 'Active: 12879904 kB' 'Inactive: 4709516 kB' 'Active(anon): 12401548 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545000 kB' 'Mapped: 213736 kB' 'Shmem: 11860132 kB' 'KReclaimable: 604552 kB' 'Slab: 1316200 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711648 kB' 'KernelStack: 22864 kB' 'PageTables: 9888 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13896708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220852 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- 
setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.992 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.992 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.993 21:07:48 -- setup/common.sh@33 -- # echo 0 00:04:13.993 21:07:48 -- setup/common.sh@33 -- # return 0 00:04:13.993 21:07:48 -- setup/hugepages.sh@97 -- # anon=0 00:04:13.993 21:07:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.993 21:07:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.993 21:07:48 -- setup/common.sh@18 -- # local node= 00:04:13.993 21:07:48 -- setup/common.sh@19 -- # local var val 00:04:13.993 21:07:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.993 21:07:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.993 21:07:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.993 21:07:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.993 21:07:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.993 21:07:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36679260 kB' 'MemAvailable: 41824192 kB' 'Buffers: 4096 kB' 'Cached: 17043912 kB' 'SwapCached: 0 kB' 'Active: 12880176 kB' 'Inactive: 4709516 kB' 'Active(anon): 12401820 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545252 kB' 'Mapped: 213736 kB' 'Shmem: 11860136 kB' 'KReclaimable: 604552 kB' 'Slab: 1316112 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711560 kB' 'KernelStack: 22896 kB' 'PageTables: 9764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13896720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220772 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 
kB' 'DirectMap1G: 20971520 kB' 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.993 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.993 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # 
continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 
-- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.994 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.994 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.995 21:07:48 -- setup/common.sh@33 -- # echo 0 00:04:13.995 21:07:48 -- setup/common.sh@33 -- # return 0 00:04:13.995 21:07:48 -- setup/hugepages.sh@99 -- # surp=0 00:04:13.995 21:07:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.995 21:07:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.995 21:07:48 -- setup/common.sh@18 -- # local node= 00:04:13.995 21:07:48 -- setup/common.sh@19 -- # local var val 00:04:13.995 21:07:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.995 21:07:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.995 21:07:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.995 21:07:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.995 21:07:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.995 21:07:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36680096 kB' 'MemAvailable: 41825028 kB' 'Buffers: 4096 kB' 'Cached: 17043912 kB' 'SwapCached: 0 kB' 'Active: 12879664 kB' 'Inactive: 4709516 kB' 'Active(anon): 12401308 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544580 kB' 'Mapped: 213652 kB' 'Shmem: 11860136 kB' 'KReclaimable: 604552 kB' 'Slab: 1316060 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711508 kB' 'KernelStack: 22928 kB' 'PageTables: 9608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13896736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220836 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.995 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.995 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 
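The dense trace above is the harness's meminfo lookup in progress: it reads /proc/meminfo (or a per-node meminfo file when a node id is given), splits each line on ': ', and compares every key against the one requested (here HugePages_Rsvd) until it finds a match. A minimal sketch of that lookup pattern follows; the function name get_meminfo_sketch and its exact structure are approximations for orientation, not a verbatim copy of the SPDK setup/common.sh helper.

# Approximate shape of the lookup traced above (illustrative, not the SPDK source).
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val rest
    # Per-node statistics live under sysfs when a node id is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node N "; strip that first.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then   # this comparison is what fills the trace
            echo "$val"                 # numeric value; the unit (kB) lands in $rest
            return 0
        fi
    done < "$mem_f"
    return 1
}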
00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.996 21:07:48 -- setup/common.sh@33 -- # echo 0 00:04:13.996 21:07:48 -- setup/common.sh@33 -- # return 0 00:04:13.996 21:07:48 -- setup/hugepages.sh@100 -- # resv=0 00:04:13.996 21:07:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.996 nr_hugepages=1024 00:04:13.996 21:07:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.996 resv_hugepages=0 00:04:13.996 21:07:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.996 surplus_hugepages=0 00:04:13.996 21:07:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.996 anon_hugepages=0 00:04:13.996 21:07:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.996 21:07:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.996 21:07:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.996 21:07:48 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:13.996 21:07:48 -- setup/common.sh@18 -- # local node= 00:04:13.996 21:07:48 -- setup/common.sh@19 -- # local var val 00:04:13.996 21:07:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.996 21:07:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.996 21:07:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.996 21:07:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.996 21:07:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.996 21:07:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36683876 kB' 'MemAvailable: 41828808 kB' 'Buffers: 4096 kB' 'Cached: 17043916 kB' 'SwapCached: 0 kB' 'Active: 12879080 kB' 'Inactive: 4709516 kB' 'Active(anon): 12400724 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543984 kB' 'Mapped: 213652 kB' 'Shmem: 11860140 kB' 'KReclaimable: 604552 kB' 'Slab: 1316252 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711700 kB' 'KernelStack: 22736 kB' 'PageTables: 9492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13896596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220852 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.996 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.996 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
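With the surplus and reserved counts read out (both 0 here) and nr_hugepages known to be 1024, the check itself is plain arithmetic: the HugePages_Total reported by the kernel must equal the requested nr_hugepages plus any surplus and reserved pages, which is exactly the (( 1024 == nr_hugepages + surp + resv )) test visible in the trace. Roughly, reusing the hypothetical helper sketched earlier:

# Rough shape of the consistency check (illustrative only; names are placeholders).
nr_hugepages=1024                                  # count requested by the test
surp=$(get_meminfo_sketch HugePages_Surp)          # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)        # 1024 in this run
# Consistent when the kernel's total accounts for every requested page plus
# anything it over-allocated (surplus) or promised to mappings (reserved).
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: total=$total"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
fi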
00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # 
continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.997 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.997 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 
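The backslash-riddled right-hand sides in these comparisons (e.g. \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) are not log corruption: when bash's xtrace prints a [[ ... == ... ]] test whose right-hand side came from a quoted expansion, it escapes every character to show that the word is matched literally rather than as a glob pattern. A short reproduction, assuming nothing beyond a stock bash shell:

# Reproduces the escaping seen in the trace; this is xtrace behavior, not SPDK code.
set -x
get=HugePages_Total
[[ MemTotal == "$get" ]]   # trace line reads: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
set +x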
00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.998 21:07:48 -- setup/common.sh@33 -- # echo 1024 00:04:13.998 21:07:48 -- setup/common.sh@33 -- # return 0 00:04:13.998 21:07:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.998 21:07:48 -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.998 21:07:48 -- setup/hugepages.sh@27 -- # local node 00:04:13.998 21:07:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.998 21:07:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.998 21:07:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.998 21:07:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:13.998 21:07:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.998 21:07:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.998 21:07:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.998 21:07:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.998 21:07:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.998 21:07:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.998 21:07:48 -- setup/common.sh@18 -- # local node=0 00:04:13.998 21:07:48 -- setup/common.sh@19 -- # local var val 00:04:13.998 21:07:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.998 21:07:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.998 21:07:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.998 21:07:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.998 21:07:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.998 21:07:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21966208 kB' 'MemUsed: 10625876 kB' 'SwapCached: 0 
kB' 'Active: 6562332 kB' 'Inactive: 569080 kB' 'Active(anon): 6285020 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6971848 kB' 'Mapped: 85336 kB' 'AnonPages: 162672 kB' 'Shmem: 6125456 kB' 'KernelStack: 11976 kB' 'PageTables: 5676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389720 kB' 'Slab: 728968 kB' 'SReclaimable: 389720 kB' 'SUnreclaim: 339248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.998 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.998 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 
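From here the same lookup is repeated per NUMA node: the harness enumerates the node directories under /sys/devices/system/node, records how many hugepages each node holds (1024 on node0, 0 on node1 in this run), and reads that node's own meminfo file for its surplus count, which is what produces the "node0=1024 expecting 1024" summary further down. A compressed sketch of that walk, again with hypothetical names and reusing the helper from above:

# Per-node walk approximating the get_nodes/per-node checks in this trace (illustrative).
nodes_sys=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # 2048 kB pages; on this machine node0 holds all 1024 pages and node1 holds none.
    nodes_sys[node]=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "nodes found: ${#nodes_sys[@]}"                   # 2 on this dual-socket system
for node in "${!nodes_sys[@]}"; do
    surp=$(get_meminfo_sketch HugePages_Surp "$node")  # per-node surplus, 0 here
    echo "node$node=${nodes_sys[node]} (surplus $surp)"
done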
00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # continue 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.999 21:07:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.999 21:07:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.999 21:07:48 -- setup/common.sh@33 -- # echo 0 00:04:13.999 21:07:48 -- setup/common.sh@33 -- # return 0 00:04:13.999 21:07:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.999 21:07:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.999 21:07:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.999 21:07:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.999 21:07:48 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.999 node0=1024 expecting 1024 00:04:13.999 21:07:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.999 00:04:13.999 real 0m6.143s 00:04:13.999 user 0m1.477s 00:04:13.999 sys 0m2.769s 00:04:13.999 21:07:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.999 21:07:48 -- common/autotest_common.sh@10 -- # set +x 00:04:13.999 ************************************ 00:04:13.999 END TEST default_setup 00:04:13.999 ************************************ 00:04:14.259 21:07:48 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:14.259 21:07:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.259 21:07:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.259 21:07:48 -- common/autotest_common.sh@10 -- # set +x 00:04:14.259 ************************************ 00:04:14.259 START TEST per_node_1G_alloc 00:04:14.259 ************************************ 00:04:14.259 21:07:48 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:14.259 21:07:48 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:14.259 21:07:48 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:14.259 21:07:48 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:14.259 21:07:48 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:14.259 21:07:48 -- setup/hugepages.sh@51 -- # shift 00:04:14.259 21:07:48 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:14.259 21:07:48 -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.259 21:07:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.259 21:07:48 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:14.259 21:07:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:14.259 21:07:48 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:14.259 21:07:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.259 21:07:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:14.259 21:07:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.259 21:07:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.259 21:07:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.259 21:07:48 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:14.259 21:07:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.259 21:07:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:14.259 21:07:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.259 21:07:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:14.259 21:07:48 -- setup/hugepages.sh@73 -- # return 0 00:04:14.259 21:07:48 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:14.259 
21:07:48 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:14.259 21:07:48 -- setup/hugepages.sh@146 -- # setup output 00:04:14.259 21:07:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.259 21:07:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:18.461 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.461 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.461 21:07:52 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:18.461 21:07:52 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:18.461 21:07:52 -- setup/hugepages.sh@89 -- # local node 00:04:18.461 21:07:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.461 21:07:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.461 21:07:52 -- setup/hugepages.sh@92 -- # local surp 00:04:18.461 21:07:52 -- setup/hugepages.sh@93 -- # local resv 00:04:18.461 21:07:52 -- setup/hugepages.sh@94 -- # local anon 00:04:18.461 21:07:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.461 21:07:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.461 21:07:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.461 21:07:52 -- setup/common.sh@18 -- # local node= 00:04:18.461 21:07:52 -- setup/common.sh@19 -- # local var val 00:04:18.461 21:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.461 21:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.461 21:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.461 21:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.461 21:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.461 21:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.461 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.461 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36680628 kB' 'MemAvailable: 41825560 kB' 'Buffers: 4096 kB' 'Cached: 17044040 kB' 'SwapCached: 0 kB' 'Active: 12878436 kB' 'Inactive: 4709516 kB' 'Active(anon): 12400080 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542980 kB' 'Mapped: 212508 kB' 
'Shmem: 11860264 kB' 'KReclaimable: 604552 kB' 'Slab: 1317084 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712532 kB' 'KernelStack: 22544 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13886548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220868 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 
-- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- 
setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.462 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.462 21:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.463 21:07:52 -- setup/common.sh@33 -- # echo 0 00:04:18.463 21:07:52 -- setup/common.sh@33 -- # return 0 00:04:18.463 21:07:52 -- setup/hugepages.sh@97 -- # anon=0 00:04:18.463 21:07:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.463 21:07:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.463 21:07:52 -- setup/common.sh@18 -- # local node= 00:04:18.463 21:07:52 -- setup/common.sh@19 -- # local var val 00:04:18.463 21:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.463 21:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.463 21:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.463 21:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.463 21:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.463 21:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36683936 kB' 'MemAvailable: 41828868 kB' 'Buffers: 4096 kB' 'Cached: 17044040 kB' 'SwapCached: 0 kB' 'Active: 12879172 kB' 'Inactive: 4709516 kB' 'Active(anon): 12400816 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543784 kB' 'Mapped: 212508 kB' 'Shmem: 11860264 kB' 'KReclaimable: 604552 kB' 'Slab: 1317056 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712504 kB' 'KernelStack: 22544 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13886676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220820 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': 
' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.463 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.463 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
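The backslash-escaped names in the comparisons above (for example \H\u\g\e\P\a\g\e\s\_\S\u\r\p) are not corruption: under set -x, bash appears to re-quote the right-hand side of a [[ ... == ... ]] test by escaping every character, because an unquoted right-hand side is pattern context. A minimal sketch that should reproduce the effect; the variable names here are illustrative, not the ones used by setup/common.sh:

    #!/usr/bin/env bash
    set -x
    get=HugePages_Surp     # field being looked up
    var=MemTotal           # field just read from /proc/meminfo
    # the trace prints the expanded RHS as \H\u\g\e\P\a\g\e\s\_\S\u\r\p
    [[ $var == $get ]] || :   # no match, keep scanning
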
00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.464 21:07:52 -- setup/common.sh@33 -- # echo 0 00:04:18.464 21:07:52 -- setup/common.sh@33 -- # return 0 00:04:18.464 21:07:52 -- setup/hugepages.sh@99 -- # surp=0 00:04:18.464 21:07:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.464 21:07:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.464 21:07:52 -- setup/common.sh@18 -- # local node= 00:04:18.464 21:07:52 -- setup/common.sh@19 -- # local var val 00:04:18.464 21:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.464 21:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.464 21:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.464 21:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.464 21:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.464 21:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36686496 kB' 'MemAvailable: 41831428 kB' 'Buffers: 4096 kB' 'Cached: 17044056 kB' 'SwapCached: 0 kB' 'Active: 12878288 kB' 'Inactive: 4709516 kB' 'Active(anon): 12399932 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542964 kB' 'Mapped: 212508 kB' 'Shmem: 11860280 kB' 'KReclaimable: 604552 kB' 'Slab: 1317112 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712560 kB' 'KernelStack: 22544 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13886572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220772 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 
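What is being traced here, field by field, is a plain key/value scan of /proc/meminfo: the file contents are read into an array, each entry is split on ': ', every field that is not the one requested is skipped with continue, and the value of the matching field is echoed back to the caller (0 for HugePages_Surp in this run, hence surp=0 above, and the HugePages_Rsvd pass now under way works the same way). A minimal standalone sketch of the same lookup, with an illustrative function name rather than the script's own helper:

    #!/usr/bin/env bash
    # Print the value of a single /proc/meminfo field (kB or pages).
    meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] || continue   # skip non-matching fields
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1                               # field absent on this kernel
    }

    meminfo_value HugePages_Surp    # prints 0 on the machine traced above
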
00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.464 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.464 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 
-- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.465 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.465 21:07:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.466 21:07:52 -- setup/common.sh@33 -- # echo 0 00:04:18.466 21:07:52 -- setup/common.sh@33 -- # return 0 00:04:18.466 21:07:52 -- setup/hugepages.sh@100 -- # resv=0 00:04:18.466 21:07:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:18.466 nr_hugepages=1024 00:04:18.466 21:07:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.466 resv_hugepages=0 00:04:18.466 21:07:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.466 surplus_hugepages=0 00:04:18.466 21:07:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.466 anon_hugepages=0 00:04:18.466 21:07:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.466 21:07:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:18.466 21:07:52 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:04:18.466 21:07:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.466 21:07:52 -- setup/common.sh@18 -- # local node= 00:04:18.466 21:07:52 -- setup/common.sh@19 -- # local var val 00:04:18.466 21:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.466 21:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.466 21:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.466 21:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.466 21:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.466 21:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36687256 kB' 'MemAvailable: 41832188 kB' 'Buffers: 4096 kB' 'Cached: 17044068 kB' 'SwapCached: 0 kB' 'Active: 12878400 kB' 'Inactive: 4709516 kB' 'Active(anon): 12400044 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543036 kB' 'Mapped: 212508 kB' 'Shmem: 11860292 kB' 'KReclaimable: 604552 kB' 'Slab: 1317112 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712560 kB' 'KernelStack: 22544 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13886588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220788 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.466 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.466 21:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.467 21:07:53 -- setup/common.sh@33 -- # echo 1024 00:04:18.467 21:07:53 -- setup/common.sh@33 -- # return 0 00:04:18.467 21:07:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.467 21:07:53 -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.467 21:07:53 -- setup/hugepages.sh@27 -- # local node 00:04:18.467 21:07:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.467 21:07:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.467 21:07:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.467 21:07:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.467 21:07:53 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:18.467 21:07:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.467 21:07:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.467 21:07:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.467 21:07:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.467 21:07:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.467 21:07:53 -- setup/common.sh@18 -- # local node=0 00:04:18.467 21:07:53 -- setup/common.sh@19 -- # local var val 00:04:18.467 21:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.467 21:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.467 21:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.467 21:07:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.467 21:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.467 21:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32592084 kB' 'MemFree: 23023136 kB' 'MemUsed: 9568948 kB' 'SwapCached: 0 kB' 'Active: 6562824 kB' 'Inactive: 569080 kB' 'Active(anon): 6285512 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6971936 kB' 'Mapped: 84820 kB' 'AnonPages: 163124 kB' 'Shmem: 6125544 kB' 'KernelStack: 11704 kB' 'PageTables: 4840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389720 kB' 'Slab: 729496 kB' 'SReclaimable: 389720 kB' 'SUnreclaim: 339776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.467 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.467 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 
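At this point the same scanner is re-pointed at the per-node files: the memory file becomes /sys/devices/system/node/node0/meminfo (and node1 further down), and because every line in those files starts with "Node <N> ", the trace strips that prefix with the extglob expansion ${mem[@]#Node +([0-9]) } before splitting on ': '. A sketch of that per-node variant, again with an illustrative function name rather than the script's own:

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern, as in the traced script

    node_meminfo_value() {
        local node=$1 key=$2 line var val _
        while read -r line; do
            # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
            line=${line#Node +([0-9]) }
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] || continue
            echo "$val"
            return 0
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    node_meminfo_value 0 HugePages_Total    # 512 in the node0 snapshot printed above
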
00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- 
setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 
00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.468 21:07:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.468 21:07:53 -- setup/common.sh@33 -- # echo 0 00:04:18.468 21:07:53 -- setup/common.sh@33 -- # return 0 00:04:18.468 21:07:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.468 21:07:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.468 21:07:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.468 21:07:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:18.468 21:07:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.468 21:07:53 -- setup/common.sh@18 -- # local node=1 00:04:18.468 21:07:53 -- setup/common.sh@19 -- # local var val 00:04:18.468 21:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.468 21:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.468 21:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:18.468 21:07:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:18.468 21:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.468 21:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.468 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 13664276 kB' 'MemUsed: 14038832 kB' 'SwapCached: 0 kB' 'Active: 6315224 kB' 'Inactive: 4140436 kB' 'Active(anon): 6114180 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10076244 kB' 'Mapped: 127688 kB' 'AnonPages: 379528 kB' 'Shmem: 5734764 kB' 'KernelStack: 10824 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214832 kB' 'Slab: 587616 kB' 'SReclaimable: 214832 kB' 'SUnreclaim: 372784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- 
setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # continue 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.469 21:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.469 21:07:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.469 21:07:53 -- setup/common.sh@33 -- # echo 0 00:04:18.469 21:07:53 -- setup/common.sh@33 -- # return 0 00:04:18.469 21:07:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.469 21:07:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.469 21:07:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.469 21:07:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.469 21:07:53 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:18.469 node0=512 expecting 512 00:04:18.469 21:07:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.469 21:07:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.470 21:07:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.470 21:07:53 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:18.470 node1=512 expecting 512 00:04:18.470 21:07:53 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:18.470 00:04:18.470 real 0m4.173s 00:04:18.470 user 0m1.565s 00:04:18.470 sys 0m2.691s 00:04:18.470 21:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.470 21:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:18.470 ************************************ 00:04:18.470 END TEST per_node_1G_alloc 00:04:18.470 ************************************ 00:04:18.470 21:07:53 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:18.470 
21:07:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.470 21:07:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.470 21:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:18.470 ************************************ 00:04:18.470 START TEST even_2G_alloc 00:04:18.470 ************************************ 00:04:18.470 21:07:53 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:18.470 21:07:53 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:18.470 21:07:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:18.470 21:07:53 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:18.470 21:07:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.470 21:07:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:18.470 21:07:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.470 21:07:53 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.470 21:07:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.470 21:07:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.470 21:07:53 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.470 21:07:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.470 21:07:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.470 21:07:53 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.470 21:07:53 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:18.470 21:07:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.470 21:07:53 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.470 21:07:53 -- setup/hugepages.sh@83 -- # : 512 00:04:18.470 21:07:53 -- setup/hugepages.sh@84 -- # : 1 00:04:18.470 21:07:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.470 21:07:53 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.470 21:07:53 -- setup/hugepages.sh@83 -- # : 0 00:04:18.470 21:07:53 -- setup/hugepages.sh@84 -- # : 0 00:04:18.470 21:07:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.470 21:07:53 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:18.470 21:07:53 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:18.470 21:07:53 -- setup/hugepages.sh@153 -- # setup output 00:04:18.470 21:07:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.470 21:07:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:22.671 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.671 0000:80:04.0 (8086 2021): 
Already using the vfio-pci driver 00:04:22.671 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.671 21:07:57 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:22.671 21:07:57 -- setup/hugepages.sh@89 -- # local node 00:04:22.671 21:07:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.671 21:07:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.671 21:07:57 -- setup/hugepages.sh@92 -- # local surp 00:04:22.671 21:07:57 -- setup/hugepages.sh@93 -- # local resv 00:04:22.671 21:07:57 -- setup/hugepages.sh@94 -- # local anon 00:04:22.671 21:07:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.671 21:07:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.671 21:07:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.671 21:07:57 -- setup/common.sh@18 -- # local node= 00:04:22.671 21:07:57 -- setup/common.sh@19 -- # local var val 00:04:22.671 21:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.671 21:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.671 21:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.671 21:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.671 21:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.671 21:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36699220 kB' 'MemAvailable: 41844152 kB' 'Buffers: 4096 kB' 'Cached: 17044192 kB' 'SwapCached: 0 kB' 'Active: 12879020 kB' 'Inactive: 4709516 kB' 'Active(anon): 12400664 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543580 kB' 'Mapped: 212516 kB' 'Shmem: 11860416 kB' 'KReclaimable: 604552 kB' 'Slab: 1316568 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712016 kB' 'KernelStack: 22512 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13887344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220772 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 
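The xtrace above is setup/common.sh's get_meminfo helper walking a meminfo file key by key: it picks /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo for a per-node query), strips the "Node <N> " prefix, splits each line on ': ', and echoes the value once the requested key matches. A minimal sketch of that lookup, reconstructed from the traced steps rather than from the script source (variable and path names follow the trace):

    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen in the trace

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy when it exists, as in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local mem line var val _
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node line prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 for node 1, as the traced call returns above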
00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.671 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.671 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 
21:07:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 21:07:57 -- setup/common.sh@33 -- # echo 0 00:04:22.672 21:07:57 -- setup/common.sh@33 -- # 
return 0 00:04:22.672 21:07:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:22.672 21:07:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.672 21:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.672 21:07:57 -- setup/common.sh@18 -- # local node= 00:04:22.672 21:07:57 -- setup/common.sh@19 -- # local var val 00:04:22.672 21:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.672 21:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.672 21:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.672 21:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.672 21:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.672 21:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36699328 kB' 'MemAvailable: 41844260 kB' 'Buffers: 4096 kB' 'Cached: 17044192 kB' 'SwapCached: 0 kB' 'Active: 12879636 kB' 'Inactive: 4709516 kB' 'Active(anon): 12401280 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544252 kB' 'Mapped: 212516 kB' 'Shmem: 11860416 kB' 'KReclaimable: 604552 kB' 'Slab: 1316524 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711972 kB' 'KernelStack: 22496 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13887356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.672 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 
-- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 21:07:57 -- setup/common.sh@33 -- # echo 0 00:04:22.673 21:07:57 -- setup/common.sh@33 -- # return 0 00:04:22.673 21:07:57 -- setup/hugepages.sh@99 -- # surp=0 00:04:22.673 21:07:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.673 21:07:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.673 21:07:57 -- setup/common.sh@18 -- # local node= 00:04:22.673 21:07:57 -- setup/common.sh@19 -- # local var val 00:04:22.673 21:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.673 21:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.673 21:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.673 21:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.673 21:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.673 21:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36700656 kB' 'MemAvailable: 41845588 kB' 'Buffers: 4096 kB' 'Cached: 17044204 kB' 'SwapCached: 0 kB' 'Active: 12878520 kB' 'Inactive: 4709516 kB' 'Active(anon): 12400164 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543044 kB' 'Mapped: 212512 kB' 'Shmem: 11860428 kB' 'KReclaimable: 604552 kB' 'Slab: 1316572 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712020 kB' 'KernelStack: 22528 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13887368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 
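The dump above reports HugePages_Total: 1024 with Hugepagesize: 2048 kB, and the hugepages.sh lines that open even_2G_alloc set a target of 512 pages on each of the two nodes. A quick arithmetic check of those figures, using only numbers already present in the log:

    # 1024 hugepages of 2048 kB each:
    echo $(( 1024 * 2048 ))   # 2097152 kB, the 'Hugetlb:' figure in the dumps
    # split evenly across the two NUMA nodes traced here:
    echo $(( 1024 / 2 ))      # 512 per node, the same per-node count checked by the
                              # 'node0=512 expecting 512' / 'node1=512 expecting 512'
                              # lines at the end of the preceding per_node_1G_alloc run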
00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.674 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 
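The xtrace output above is setup/common.sh scanning /proc/meminfo one "key: value" pair at a time with IFS=': ', hitting 'continue' for every key that is not the one requested (HugePages_Rsvd here) and echoing the matching value once it is found. A minimal standalone sketch of that lookup idea follows; it is illustrative only, not the project's get_meminfo helper, and the function name get_meminfo_field is made up for the example.

#!/usr/bin/env bash
# Sketch: fetch one field from /proc/meminfo the same way the traced
# loop does -- split each line on ': ' and stop at the requested key.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo_field HugePages_Rsvd    # prints 0 on this run
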
00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 21:07:57 -- setup/common.sh@33 -- # echo 0 00:04:22.675 21:07:57 -- setup/common.sh@33 -- # return 0 00:04:22.675 21:07:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:22.675 21:07:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.675 nr_hugepages=1024 00:04:22.675 21:07:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.675 resv_hugepages=0 00:04:22.675 21:07:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.675 surplus_hugepages=0 00:04:22.675 21:07:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.675 anon_hugepages=0 00:04:22.675 21:07:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.675 21:07:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.675 21:07:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.675 21:07:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.675 21:07:57 -- setup/common.sh@18 -- # local node= 00:04:22.675 21:07:57 -- setup/common.sh@19 -- # local var val 00:04:22.675 21:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.675 21:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.675 21:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.675 21:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.675 21:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.675 21:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36700768 kB' 'MemAvailable: 41845700 kB' 'Buffers: 4096 kB' 'Cached: 17044220 kB' 'SwapCached: 0 kB' 'Active: 12879180 kB' 'Inactive: 4709516 kB' 'Active(anon): 12400824 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543684 kB' 'Mapped: 212512 kB' 'Shmem: 11860444 kB' 'KReclaimable: 604552 kB' 'Slab: 1316580 kB' 'SReclaimable: 604552 kB' 
'SUnreclaim: 712028 kB' 'KernelStack: 22528 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13887384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.675 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 21:07:57 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.676 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.676 21:07:57 -- setup/common.sh@33 -- # echo 1024 00:04:22.676 21:07:57 -- setup/common.sh@33 -- # return 0 00:04:22.676 21:07:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.676 21:07:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.676 21:07:57 -- setup/hugepages.sh@27 -- # local node 00:04:22.676 21:07:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.676 21:07:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:22.676 21:07:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.676 21:07:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:22.677 21:07:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.677 21:07:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.677 21:07:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.677 21:07:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.677 21:07:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.677 21:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.677 21:07:57 -- setup/common.sh@18 -- # local node=0 00:04:22.677 21:07:57 -- setup/common.sh@19 -- # local var val 00:04:22.677 21:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.677 21:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.677 21:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.677 21:07:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.677 21:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.677 21:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 23032168 kB' 'MemUsed: 9559916 kB' 'SwapCached: 0 kB' 'Active: 6562940 kB' 'Inactive: 569080 kB' 'Active(anon): 6285628 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6972020 kB' 'Mapped: 84820 kB' 'AnonPages: 163156 kB' 'Shmem: 6125628 kB' 'KernelStack: 11720 kB' 'PageTables: 4880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389720 kB' 'Slab: 728992 kB' 'SReclaimable: 389720 kB' 'SUnreclaim: 339272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ MemUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # 
continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 21:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@33 -- # echo 0 00:04:22.678 21:07:57 -- setup/common.sh@33 -- # return 0 00:04:22.678 21:07:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.678 21:07:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.678 21:07:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.678 21:07:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:22.678 21:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.678 21:07:57 -- setup/common.sh@18 -- # local node=1 00:04:22.678 21:07:57 -- setup/common.sh@19 -- # local var val 00:04:22.678 21:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.678 21:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.678 21:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:22.678 21:07:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:22.678 21:07:57 -- setup/common.sh@28 -- # mapfile -t mem 
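When get_meminfo is given a node number, the trace shows it swapping mem_f from /proc/meminfo to /sys/devices/system/node/node1/meminfo and stripping the leading "Node N " prefix before running the same field scan. A standalone sketch of a per-node lookup is below; node_meminfo_field is a hypothetical name and the parsing is simplified (plain whitespace splitting rather than the script's extglob prefix strip).

# Sketch: read one field from a NUMA node's meminfo, whose lines look
# like "Node 1 HugePages_Surp:      0" (note the extra "Node N" prefix).
node_meminfo_field() {
    local node=$1 want=$2 _node _idx var val _
    while read -r _node _idx var val _; do
        [[ ${var%:} == "$want" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}
node_meminfo_field 1 HugePages_Surp    # prints 0 on this run
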
00:04:22.678 21:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 13669176 kB' 'MemUsed: 14033932 kB' 'SwapCached: 0 kB' 'Active: 6315936 kB' 'Inactive: 4140436 kB' 'Active(anon): 6114892 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10076312 kB' 'Mapped: 127692 kB' 'AnonPages: 380236 kB' 'Shmem: 5734832 kB' 'KernelStack: 10808 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214832 kB' 'Slab: 587580 kB' 'SReclaimable: 214832 kB' 'SUnreclaim: 372748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 
21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- 
setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.678 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 
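The per-node scan that resumes below returns HugePages_Surp=0 for node 1, after which hugepages.sh confirms that the two nodes together account for the whole 1024-page pool and prints the 'node0=512 expecting 512' and 'node1=512 expecting 512' lines. The cross-check can be expressed in a few lines of shell; this sketch uses awk rather than the script's read loop and assumes the usual two-column /proc/meminfo and four-column node meminfo layouts.

# Sketch: per-node HugePages_Total must add up to the global pool,
# e.g. 512 + 512 == 1024 for the even 2G allocation verified here.
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
sum=0
for f in /sys/devices/system/node/node[0-9]*/meminfo; do
    sum=$(( sum + $(awk '$3 == "HugePages_Total:" {print $4}' "$f") ))
done
echo "global=${total} nodes=${sum}"
(( sum == total )) || echo "per-node hugepage accounting is off" >&2

The odd_alloc test that starts right after reuses the same machinery with HUGEMEM=2049, i.e. 1025 pages, which get spread as 513 on one node and 512 on the other, so the same sum check still has to hold.
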
00:04:22.679 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # continue 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.679 21:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 21:07:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.679 21:07:57 -- setup/common.sh@33 -- # echo 0 00:04:22.679 21:07:57 -- setup/common.sh@33 -- # return 0 00:04:22.679 21:07:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.679 21:07:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.679 21:07:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.679 21:07:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.679 21:07:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:22.679 node0=512 expecting 512 00:04:22.679 21:07:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.679 21:07:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.679 21:07:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.679 21:07:57 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:22.679 node1=512 expecting 512 00:04:22.679 21:07:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:22.679 00:04:22.679 real 0m4.290s 00:04:22.679 user 0m1.593s 00:04:22.679 sys 0m2.783s 00:04:22.679 21:07:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.679 21:07:57 -- common/autotest_common.sh@10 -- # set +x 00:04:22.679 ************************************ 00:04:22.679 END TEST even_2G_alloc 00:04:22.679 ************************************ 00:04:22.679 21:07:57 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:22.679 21:07:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:22.679 21:07:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:22.679 21:07:57 -- common/autotest_common.sh@10 -- # set +x 00:04:22.679 ************************************ 00:04:22.679 START TEST odd_alloc 00:04:22.679 ************************************ 00:04:22.679 21:07:57 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:22.679 21:07:57 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:22.679 21:07:57 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:22.679 21:07:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:22.679 21:07:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.679 21:07:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:22.679 21:07:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:22.679 21:07:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:22.679 21:07:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.679 21:07:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:22.679 21:07:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.679 21:07:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.679 21:07:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.679 21:07:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:22.679 21:07:57 -- setup/hugepages.sh@74 -- # (( 0 > 
0 )) 00:04:22.679 21:07:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.679 21:07:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:22.679 21:07:57 -- setup/hugepages.sh@83 -- # : 513 00:04:22.679 21:07:57 -- setup/hugepages.sh@84 -- # : 1 00:04:22.679 21:07:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.679 21:07:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:22.679 21:07:57 -- setup/hugepages.sh@83 -- # : 0 00:04:22.679 21:07:57 -- setup/hugepages.sh@84 -- # : 0 00:04:22.679 21:07:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.679 21:07:57 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:22.679 21:07:57 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:22.679 21:07:57 -- setup/hugepages.sh@160 -- # setup output 00:04:22.679 21:07:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.679 21:07:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:26.877 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.877 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:26.877 21:08:01 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:26.877 21:08:01 -- setup/hugepages.sh@89 -- # local node 00:04:26.877 21:08:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.877 21:08:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.877 21:08:01 -- setup/hugepages.sh@92 -- # local surp 00:04:26.877 21:08:01 -- setup/hugepages.sh@93 -- # local resv 00:04:26.877 21:08:01 -- setup/hugepages.sh@94 -- # local anon 00:04:26.877 21:08:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.877 21:08:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.877 21:08:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.877 21:08:01 -- setup/common.sh@18 -- # local node= 00:04:26.877 21:08:01 -- setup/common.sh@19 -- # local var val 00:04:26.877 21:08:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.877 21:08:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.877 21:08:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.877 21:08:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.877 21:08:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.877 21:08:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.877 21:08:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36683096 kB' 'MemAvailable: 41828028 kB' 'Buffers: 4096 kB' 'Cached: 17044324 kB' 'SwapCached: 0 kB' 'Active: 12881016 kB' 'Inactive: 4709516 kB' 'Active(anon): 12402660 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545288 kB' 'Mapped: 212556 kB' 'Shmem: 11860548 kB' 'KReclaimable: 604552 kB' 'Slab: 1316208 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711656 kB' 'KernelStack: 22688 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 13892548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220980 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.877 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.877 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 
21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.878 21:08:01 -- setup/common.sh@33 -- # echo 0 00:04:26.878 21:08:01 -- setup/common.sh@33 -- # return 0 00:04:26.878 21:08:01 -- setup/hugepages.sh@97 -- # anon=0 00:04:26.878 21:08:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.878 21:08:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.878 21:08:01 -- setup/common.sh@18 -- # local node= 00:04:26.878 21:08:01 -- setup/common.sh@19 -- # local var val 00:04:26.878 21:08:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.878 21:08:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.878 21:08:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.878 21:08:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.878 21:08:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.878 21:08:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36682248 kB' 'MemAvailable: 41827180 kB' 'Buffers: 4096 kB' 'Cached: 17044324 kB' 'SwapCached: 0 kB' 'Active: 12881824 kB' 'Inactive: 4709516 kB' 'Active(anon): 12403468 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546132 kB' 
'Mapped: 212556 kB' 'Shmem: 11860548 kB' 'KReclaimable: 604552 kB' 'Slab: 1316208 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711656 kB' 'KernelStack: 22800 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 13892560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220948 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.878 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.878 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 
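[Annotation] The xtrace entries around this point are setup/common.sh's get_meminfo walking /proc/meminfo one "Key: value" pair at a time (IFS=': '; read -r var val _) and hitting "continue" until the requested field matches, here first AnonHugePages and then HugePages_Surp. Below is a minimal sketch of that loop reconstructed only from the trace; the real SPDK helper may differ in detail, and the mapfile redirection plus the two example calls at the bottom are assumptions.

#!/usr/bin/env bash
# Sketch of get_meminfo() as it appears in the trace; not the verbatim SPDK source.
shopt -s extglob                       # the "Node +([0-9]) " strip at @29 needs extglob

get_meminfo() {
    local get=$1 node=${2:-}           # @17/@18: field name, optional NUMA node
    local var val _
    local mem_f mem

    mem_f=/proc/meminfo                # @22: system-wide default
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # @23/@24: per-node file
    fi

    mapfile -t mem < "$mem_f"                    # @28 (plain redirection assumed)
    mem=("${mem[@]#Node +([0-9]) }")             # @29: drop the "Node N " prefix

    # @31-@33: split each "Key: value" line, skip until the requested key matches
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "$val" && return 0
        continue
    done < <(printf '%s\n' "${mem[@]}")          # @16: printf feeding the read loop
    return 1
}

get_meminfo HugePages_Surp      # system-wide, as at @99
get_meminfo HugePages_Surp 0    # node 0 only, as at @117
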
00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- 
setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 
21:08:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.879 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.879 21:08:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 
21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.880 21:08:01 -- setup/common.sh@33 -- # echo 0 00:04:26.880 21:08:01 -- setup/common.sh@33 -- # return 0 00:04:26.880 21:08:01 -- setup/hugepages.sh@99 -- # surp=0 00:04:26.880 21:08:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.880 21:08:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.880 21:08:01 -- setup/common.sh@18 -- # local node= 00:04:26.880 21:08:01 -- setup/common.sh@19 -- # local var val 00:04:26.880 21:08:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.880 21:08:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.880 21:08:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.880 21:08:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.880 21:08:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.880 21:08:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36682316 kB' 'MemAvailable: 41827248 kB' 'Buffers: 4096 kB' 'Cached: 17044332 kB' 'SwapCached: 0 kB' 'Active: 12883636 kB' 'Inactive: 4709516 kB' 'Active(anon): 12405280 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548004 kB' 'Mapped: 213060 kB' 'Shmem: 11860556 kB' 'KReclaimable: 604552 kB' 'Slab: 1316336 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711784 kB' 'KernelStack: 22704 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 13893632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220852 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- 
setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.880 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.880 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- 
setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.881 21:08:01 -- setup/common.sh@33 -- # echo 0 00:04:26.881 21:08:01 -- setup/common.sh@33 -- # return 0 00:04:26.881 21:08:01 -- setup/hugepages.sh@100 -- # resv=0 
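[Annotation] The entries that follow record setup/hugepages.sh's sanity check for this pass: it has just read AnonHugePages (anon=0), HugePages_Surp (surp=0) and HugePages_Rsvd (resv=0), echoes nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, asserts 1025 == nr_hugepages + surp + resv, and then re-reads HugePages_Total. A hedged sketch of that accounting step follows; the meminfo_val helper and the exit-on-failure handling are stand-ins, not the SPDK code.

#!/usr/bin/env bash
# Sketch of the verification seen at setup/hugepages.sh@97-@110; names mirror
# the trace where visible, everything else is an assumption.
meminfo_val() {                       # simplified stand-in for get_meminfo
    awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo
}

nr_hugepages=1025                     # the count this test configured
anon=$(meminfo_val AnonHugePages)     # @97  -> expected 0 (kB)
surp=$(meminfo_val HugePages_Surp)    # @99  -> expected 0
resv=$(meminfo_val HugePages_Rsvd)    # @100 -> expected 0

echo "nr_hugepages=$nr_hugepages"     # @102-@105: the four echoes in the log
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# @107/@109: the pool must be exactly the requested pages, nothing surplus and
# nothing reserved, and the kernel must report that same total.
(( nr_hugepages == nr_hugepages + surp + resv )) || exit 1
(( nr_hugepages == $(meminfo_val HugePages_Total) )) || exit 1
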
00:04:26.881 21:08:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:26.881 nr_hugepages=1025 00:04:26.881 21:08:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.881 resv_hugepages=0 00:04:26.881 21:08:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.881 surplus_hugepages=0 00:04:26.881 21:08:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.881 anon_hugepages=0 00:04:26.881 21:08:01 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:26.881 21:08:01 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:26.881 21:08:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.881 21:08:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.881 21:08:01 -- setup/common.sh@18 -- # local node= 00:04:26.881 21:08:01 -- setup/common.sh@19 -- # local var val 00:04:26.881 21:08:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.881 21:08:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.881 21:08:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.881 21:08:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.881 21:08:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.881 21:08:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36688876 kB' 'MemAvailable: 41833808 kB' 'Buffers: 4096 kB' 'Cached: 17044348 kB' 'SwapCached: 0 kB' 'Active: 12881516 kB' 'Inactive: 4709516 kB' 'Active(anon): 12403160 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545848 kB' 'Mapped: 212864 kB' 'Shmem: 11860572 kB' 'KReclaimable: 604552 kB' 'Slab: 1316376 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711824 kB' 'KernelStack: 22704 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 13892840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220836 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.881 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.881 21:08:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.882 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.882 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
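[Annotation] Once HugePages_Total comes back as 1025 (echo 1025 / return 0 just below), hugepages.sh switches from the system-wide view to the per-node one: get_nodes globs /sys/devices/system/node/node+([0-9]), records 512 pages for node0 and 513 for node1 (no_nodes=2), then re-runs get_meminfo HugePages_Surp with node=0 so the per-node meminfo file is read. A rough sketch of that per-node walk follows; the source of the 512/513 values is expanded away in the xtrace, so reading HugePages_Total from each node's meminfo is an assumption, as is the final echo format.

#!/usr/bin/env bash
# Sketch of the per-node breakdown traced at setup/hugepages.sh@27-@33 and
# @115-@117; array names follow the trace, the value sources are assumed.
shopt -s extglob nullglob

nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # e.g. node0 -> 512, node1 -> 513 on this rig (per-node HugePages_Total)
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done
no_nodes=${#nodes_sys[@]}             # @32: 2 NUMA nodes here
(( no_nodes > 0 )) || exit 1          # @33

for n in "${!nodes_sys[@]}"; do       # @115-@117: per-node surplus check
    surp=$(awk '/HugePages_Surp:/ {print $NF}' \
        "/sys/devices/system/node/node$n/meminfo")
    echo "node$n: HugePages_Total=${nodes_sys[$n]} HugePages_Surp=$surp"
done
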
00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.883 21:08:01 -- setup/common.sh@33 -- # echo 1025 00:04:26.883 21:08:01 -- setup/common.sh@33 -- # return 0 00:04:26.883 21:08:01 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:26.883 21:08:01 -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.883 21:08:01 -- setup/hugepages.sh@27 -- # local node 00:04:26.883 21:08:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.883 21:08:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.883 21:08:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.883 21:08:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:26.883 21:08:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.883 21:08:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.883 21:08:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.883 21:08:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.883 21:08:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.883 21:08:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.883 21:08:01 -- setup/common.sh@18 -- # local node=0 00:04:26.883 21:08:01 -- setup/common.sh@19 -- # 
local var val 00:04:26.883 21:08:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.883 21:08:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.883 21:08:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.883 21:08:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.883 21:08:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.883 21:08:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 23026388 kB' 'MemUsed: 9565696 kB' 'SwapCached: 0 kB' 'Active: 6569436 kB' 'Inactive: 569080 kB' 'Active(anon): 6292124 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6972120 kB' 'Mapped: 84820 kB' 'AnonPages: 169576 kB' 'Shmem: 6125728 kB' 'KernelStack: 11832 kB' 'PageTables: 5144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389720 kB' 'Slab: 728716 kB' 'SReclaimable: 389720 kB' 'SUnreclaim: 338996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 
21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.883 21:08:01 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.883 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.883 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@33 -- # echo 0 00:04:26.884 21:08:01 -- setup/common.sh@33 -- # return 0 00:04:26.884 21:08:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.884 21:08:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.884 21:08:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.884 21:08:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:26.884 21:08:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.884 21:08:01 -- setup/common.sh@18 -- # local node=1 00:04:26.884 21:08:01 -- setup/common.sh@19 -- # local var val 00:04:26.884 21:08:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.884 21:08:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.884 21:08:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:26.884 21:08:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:26.884 21:08:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.884 21:08:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 13655332 kB' 'MemUsed: 14047776 kB' 'SwapCached: 0 kB' 'Active: 6315828 kB' 'Inactive: 4140436 kB' 'Active(anon): 6114784 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10076356 kB' 'Mapped: 127884 kB' 'AnonPages: 379988 kB' 'Shmem: 5734876 kB' 'KernelStack: 10872 kB' 'PageTables: 3512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214832 kB' 'Slab: 587660 kB' 'SReclaimable: 214832 kB' 'SUnreclaim: 372828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 
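The trace above shows how the helper resolves a single meminfo key: when a node number is passed it switches mem_f from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo, strips the "Node <N> " prefix from every line, and then walks the "key: value" pairs with IFS=': ' until the requested field matches, echoing that value (0 for HugePages_Surp here). A minimal stand-alone sketch of the same lookup; the helper name read_node_meminfo is illustrative and is not the repository's setup/common.sh:

    # Return one field from /proc/meminfo, or from a node's meminfo when a node
    # number is given. Per-node files prefix each line with "Node <n> ".
    read_node_meminfo() {            # usage: read_node_meminfo <key> [node]
        local key=$1 node=${2:-} src=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            src=/sys/devices/system/node/node$node/meminfo
        sed "s/^Node $node //" "$src" |
            awk -F': *' -v k="$key" '$1 == k { print $2 + 0; exit }'
    }
    read_node_meminfo HugePages_Surp 1    # prints 0 on the node dumped above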
21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 
21:08:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.884 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.884 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # continue 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.885 21:08:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.885 21:08:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.885 21:08:01 -- setup/common.sh@33 -- # echo 0 00:04:26.885 21:08:01 -- setup/common.sh@33 -- # return 0 00:04:26.885 21:08:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.885 21:08:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.885 21:08:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.885 21:08:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:26.885 node0=512 expecting 513 00:04:26.885 21:08:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.885 21:08:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.885 21:08:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 
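The surplus lookups above add 0 to each node's count, and the hugepages.sh@126-128 lines that follow reduce the check to "the multiset of per-node counts matches what was requested": each count is used as an array index, so the key lists expand in ascending order and one string compare covers both nodes regardless of which node ended up with 512 and which with 513. A condensed sketch of that comparison using the counts visible in this run; the variable names follow the trace, but this is not the full verify function:

    nodes_test=(512 513)    # per-node totals reported by the kernel
    nodes_sys=(513 512)     # per-node counts the test configured (nodes swapped)
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # index by the count itself ...
        sorted_s[nodes_sys[node]]=1     # ... so "${!array[*]}" expands in sorted order
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "512 513 == 512 513: per-node split OK"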
00:04:26.885 21:08:01 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:26.885 node1=513 expecting 512 00:04:26.885 21:08:01 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:26.885 00:04:26.885 real 0m4.203s 00:04:26.885 user 0m1.465s 00:04:26.885 sys 0m2.735s 00:04:26.885 21:08:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.885 21:08:01 -- common/autotest_common.sh@10 -- # set +x 00:04:26.885 ************************************ 00:04:26.885 END TEST odd_alloc 00:04:26.885 ************************************ 00:04:26.885 21:08:01 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:26.885 21:08:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.885 21:08:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.885 21:08:01 -- common/autotest_common.sh@10 -- # set +x 00:04:26.885 ************************************ 00:04:26.885 START TEST custom_alloc 00:04:26.885 ************************************ 00:04:26.885 21:08:01 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:26.885 21:08:01 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:26.885 21:08:01 -- setup/hugepages.sh@169 -- # local node 00:04:26.885 21:08:01 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:26.885 21:08:01 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:26.885 21:08:01 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:26.885 21:08:01 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:26.885 21:08:01 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:26.885 21:08:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:26.885 21:08:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:26.885 21:08:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.885 21:08:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.885 21:08:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:26.885 21:08:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.885 21:08:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.885 21:08:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.885 21:08:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:26.885 21:08:01 -- setup/hugepages.sh@83 -- # : 256 00:04:26.885 21:08:01 -- setup/hugepages.sh@84 -- # : 1 00:04:26.885 21:08:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:26.885 21:08:01 -- setup/hugepages.sh@83 -- # : 0 00:04:26.885 21:08:01 -- setup/hugepages.sh@84 -- # : 0 00:04:26.885 21:08:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:26.885 21:08:01 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:26.885 21:08:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:26.885 21:08:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:04:26.885 21:08:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:26.885 21:08:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.885 21:08:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.885 21:08:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:26.885 21:08:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.885 21:08:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.885 21:08:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.885 21:08:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:26.885 21:08:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:26.885 21:08:01 -- setup/hugepages.sh@78 -- # return 0 00:04:26.885 21:08:01 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:26.885 21:08:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:26.885 21:08:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:26.885 21:08:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:26.885 21:08:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:26.885 21:08:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:26.885 21:08:01 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:26.886 21:08:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.886 21:08:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.886 21:08:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:26.886 21:08:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.886 21:08:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.886 21:08:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.886 21:08:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.886 21:08:01 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:26.886 21:08:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:26.886 21:08:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:26.886 21:08:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:26.886 21:08:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:26.886 21:08:01 -- setup/hugepages.sh@78 -- # return 0 00:04:26.886 21:08:01 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:26.886 21:08:01 -- setup/hugepages.sh@187 -- # setup output 00:04:26.886 21:08:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.886 21:08:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:31.079 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:04:31.079 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:31.079 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:31.080 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:31.080 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:31.080 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:31.080 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:31.080 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:31.080 21:08:05 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:31.080 21:08:05 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:31.080 21:08:05 -- setup/hugepages.sh@89 -- # local node 00:04:31.080 21:08:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.080 21:08:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.080 21:08:05 -- setup/hugepages.sh@92 -- # local surp 00:04:31.080 21:08:05 -- setup/hugepages.sh@93 -- # local resv 00:04:31.080 21:08:05 -- setup/hugepages.sh@94 -- # local anon 00:04:31.080 21:08:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.080 21:08:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.080 21:08:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.080 21:08:05 -- setup/common.sh@18 -- # local node= 00:04:31.080 21:08:05 -- setup/common.sh@19 -- # local var val 00:04:31.080 21:08:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.080 21:08:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.080 21:08:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.080 21:08:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.080 21:08:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.080 21:08:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35644508 kB' 'MemAvailable: 40789440 kB' 'Buffers: 4096 kB' 'Cached: 17044468 kB' 'SwapCached: 0 kB' 'Active: 12881412 kB' 'Inactive: 4709516 kB' 'Active(anon): 12403056 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545632 kB' 'Mapped: 212532 kB' 'Shmem: 11860692 kB' 'KReclaimable: 604552 kB' 'Slab: 1316220 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711668 kB' 'KernelStack: 22576 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 13888800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 
-- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- 
setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.080 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.080 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.081 21:08:05 -- setup/common.sh@33 -- # echo 0 00:04:31.081 21:08:05 -- setup/common.sh@33 -- # return 0 00:04:31.081 21:08:05 -- setup/hugepages.sh@97 -- # anon=0 00:04:31.081 21:08:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.081 21:08:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.081 21:08:05 -- setup/common.sh@18 -- # local node= 00:04:31.081 21:08:05 -- setup/common.sh@19 -- # local var val 00:04:31.081 21:08:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.081 21:08:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.081 21:08:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.081 21:08:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.081 21:08:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.081 21:08:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35644652 kB' 'MemAvailable: 40789584 kB' 'Buffers: 4096 kB' 'Cached: 17044472 kB' 'SwapCached: 0 kB' 'Active: 12881384 kB' 'Inactive: 4709516 kB' 'Active(anon): 12403028 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545604 kB' 'Mapped: 212524 kB' 'Shmem: 11860696 kB' 'KReclaimable: 604552 kB' 'Slab: 1316296 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711744 kB' 'KernelStack: 22544 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 13888812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220692 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # 
continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.081 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.081 21:08:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 
21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.082 21:08:05 -- setup/common.sh@33 -- # echo 0 00:04:31.082 21:08:05 -- setup/common.sh@33 -- # return 0 00:04:31.082 21:08:05 -- setup/hugepages.sh@99 -- # surp=0 00:04:31.082 21:08:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.082 21:08:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.082 21:08:05 -- setup/common.sh@18 -- # local node= 00:04:31.082 21:08:05 -- setup/common.sh@19 -- # local var val 00:04:31.082 21:08:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.082 21:08:05 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:31.082 21:08:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.082 21:08:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.082 21:08:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.082 21:08:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 21:08:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35644652 kB' 'MemAvailable: 40789584 kB' 'Buffers: 4096 kB' 'Cached: 17044472 kB' 'SwapCached: 0 kB' 'Active: 12881052 kB' 'Inactive: 4709516 kB' 'Active(anon): 12402696 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545272 kB' 'Mapped: 212524 kB' 'Shmem: 11860696 kB' 'KReclaimable: 604552 kB' 'Slab: 1316296 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711744 kB' 'KernelStack: 22528 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 13888828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220692 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.082 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 
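The trace above is setup/common.sh's get_meminfo helper walking a snapshot of /proc/meminfo field by field (IFS=': '; read -r var val _) until the requested key, here HugePages_Rsvd, matches, at which point its value is echoed back to hugepages.sh. A minimal stand-alone sketch of that lookup, assuming the usual 'Key: value kB' layout; this is an illustrative re-implementation, not the exact SPDK helper:

  # Print the numeric value of a /proc/meminfo key, or fail if it is absent.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  get_meminfo HugePages_Rsvd    # prints 0 in this run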
00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.083 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.083 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.084 21:08:05 -- setup/common.sh@33 -- # echo 0 00:04:31.084 21:08:05 -- setup/common.sh@33 -- # return 0 00:04:31.084 21:08:05 -- setup/hugepages.sh@100 -- # resv=0 00:04:31.084 21:08:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:31.084 nr_hugepages=1536 00:04:31.084 21:08:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.084 resv_hugepages=0 00:04:31.084 21:08:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.084 surplus_hugepages=0 00:04:31.084 21:08:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.084 anon_hugepages=0 00:04:31.084 21:08:05 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:31.084 21:08:05 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:31.084 21:08:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.084 21:08:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.084 21:08:05 -- setup/common.sh@18 -- # local node= 00:04:31.084 21:08:05 -- setup/common.sh@19 -- # local var val 00:04:31.084 21:08:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.084 21:08:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.084 21:08:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.084 21:08:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.084 21:08:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.084 21:08:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35644656 kB' 'MemAvailable: 40789588 kB' 'Buffers: 4096 kB' 'Cached: 17044508 kB' 'SwapCached: 0 
kB' 'Active: 12881052 kB' 'Inactive: 4709516 kB' 'Active(anon): 12402696 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545224 kB' 'Mapped: 212524 kB' 'Shmem: 11860732 kB' 'KReclaimable: 604552 kB' 'Slab: 1316296 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711744 kB' 'KernelStack: 22528 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 13888840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220692 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 
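At this point hugepages.sh has already derived surp=0 and resv=0 and is re-reading HugePages_Total from the dump above; the (( 1536 == nr_hugepages + surp + resv )) assertions at hugepages.sh@107 and @110 require the configured pool to equal the pages the custom_alloc test requested plus any surplus and reserved pages. With the values visible in this run, the bookkeeping amounts to the following (variable names illustrative):

  nr_hugepages=1536                       # requested: 512 on node 0 + 1024 on node 1
  surp=$(get_meminfo HugePages_Surp)      # 0 here
  resv=$(get_meminfo HugePages_Rsvd)      # 0 here
  total=$(get_meminfo HugePages_Total)    # 1536 here

  # equivalent to the consistency check the trace performs before verifying per-node counts
  (( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"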
00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.084 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.085 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.086 21:08:05 -- setup/common.sh@33 -- # echo 1536 00:04:31.086 21:08:05 -- setup/common.sh@33 -- # return 0 00:04:31.086 21:08:05 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:31.086 21:08:05 -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.086 21:08:05 -- setup/hugepages.sh@27 -- # local node 00:04:31.086 21:08:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.086 21:08:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:31.086 21:08:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.086 21:08:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.086 21:08:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.086 21:08:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.086 21:08:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.086 21:08:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.086 21:08:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.086 21:08:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.086 21:08:05 -- setup/common.sh@18 -- # local node=0 00:04:31.086 21:08:05 -- setup/common.sh@19 -- # local var val 00:04:31.086 21:08:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.086 21:08:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.086 21:08:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.086 21:08:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.086 21:08:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.086 21:08:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 23029144 kB' 'MemUsed: 9562940 kB' 'SwapCached: 0 kB' 'Active: 6564060 kB' 'Inactive: 569080 kB' 'Active(anon): 6286748 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6972128 kB' 'Mapped: 84820 kB' 'AnonPages: 164140 kB' 'Shmem: 6125736 kB' 'KernelStack: 11704 kB' 'PageTables: 4828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389720 kB' 'Slab: 728756 kB' 'SReclaimable: 389720 kB' 'SUnreclaim: 339036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 
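The same helper is reused per NUMA node: when a node number is supplied, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the leading 'Node 0 ' prefix is stripped from each line before matching, which is why the dump above reads like an ordinary meminfo block. get_nodes has just recorded the expected split of 512 pages on node 0 and 1024 on node 1. A hedged sketch of the per-node lookup; it mirrors what the trace shows rather than the literal setup/common.sh code:

  # Illustrative per-node variant of the lookup above.
  get_node_meminfo() {
      local get=$1 node=$2 var val _
      sed 's/^Node [0-9]* //' "/sys/devices/system/node/node${node}/meminfo" |
          while IFS=': ' read -r var val _; do
              [[ $var == "$get" ]] && { echo "$val"; break; }
          done
  }

  get_node_meminfo HugePages_Total 0    # 512 in this run
  get_node_meminfo HugePages_Total 1    # 1024 in this run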
00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 
-- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@33 -- # echo 0 00:04:31.087 21:08:05 -- setup/common.sh@33 -- # return 0 00:04:31.087 21:08:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.087 21:08:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.087 21:08:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.087 21:08:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:31.087 21:08:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.087 21:08:05 -- setup/common.sh@18 -- # local node=1 
00:04:31.087 21:08:05 -- setup/common.sh@19 -- # local var val 00:04:31.087 21:08:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.087 21:08:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.087 21:08:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:31.087 21:08:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:31.087 21:08:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.087 21:08:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 12615260 kB' 'MemUsed: 15087848 kB' 'SwapCached: 0 kB' 'Active: 6317732 kB' 'Inactive: 4140436 kB' 'Active(anon): 6116688 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10076492 kB' 'Mapped: 127704 kB' 'AnonPages: 381796 kB' 'Shmem: 5735012 kB' 'KernelStack: 10856 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214832 kB' 'Slab: 587540 kB' 'SReclaimable: 214832 kB' 'SUnreclaim: 372708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(anon) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 
-- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 
-- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # continue 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 21:08:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 21:08:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.088 21:08:05 -- setup/common.sh@33 -- # echo 0 00:04:31.088 21:08:05 -- setup/common.sh@33 -- # return 0 00:04:31.088 21:08:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.088 21:08:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.088 21:08:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.088 21:08:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.088 21:08:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:31.088 node0=512 expecting 512 00:04:31.088 21:08:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.088 21:08:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.088 21:08:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.088 21:08:05 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:31.088 node1=1024 expecting 1024 00:04:31.088 21:08:05 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:31.088 00:04:31.088 real 0m3.805s 00:04:31.088 user 0m1.314s 00:04:31.088 sys 0m2.488s 00:04:31.088 21:08:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.088 21:08:05 -- common/autotest_common.sh@10 -- # set +x 00:04:31.088 ************************************ 00:04:31.088 END TEST custom_alloc 00:04:31.088 ************************************ 00:04:31.088 21:08:05 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:31.088 21:08:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.088 21:08:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.088 21:08:05 -- common/autotest_common.sh@10 -- # set +x 00:04:31.088 ************************************ 00:04:31.088 START TEST no_shrink_alloc 00:04:31.088 ************************************ 00:04:31.088 21:08:05 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:31.088 21:08:05 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:31.088 21:08:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:31.088 21:08:05 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:31.088 21:08:05 -- setup/hugepages.sh@51 -- # shift 00:04:31.088 21:08:05 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:31.088 21:08:05 -- setup/hugepages.sh@52 -- # local node_ids 00:04:31.088 21:08:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.088 21:08:05 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:31.088 21:08:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:31.088 21:08:05 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:31.088 21:08:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.088 21:08:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:31.088 21:08:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:31.088 21:08:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.088 21:08:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.088 21:08:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:31.088 21:08:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.088 21:08:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:31.088 21:08:05 -- setup/hugepages.sh@73 -- # return 0 00:04:31.088 21:08:05 -- setup/hugepages.sh@198 -- # setup output 00:04:31.088 21:08:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.088 21:08:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:35.326 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.326 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.326 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:35.326 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.326 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:35.326 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.326 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.326 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.326 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.326 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.327 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:35.327 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.327 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:35.327 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.327 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.327 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.327 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.327 21:08:09 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:35.327 21:08:09 -- setup/hugepages.sh@89 -- # local node 00:04:35.327 21:08:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.327 21:08:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.327 21:08:09 -- setup/hugepages.sh@92 -- # local surp 00:04:35.327 21:08:09 -- setup/hugepages.sh@93 -- # local resv 00:04:35.327 21:08:09 -- setup/hugepages.sh@94 -- # local anon 00:04:35.327 21:08:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.327 21:08:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.327 21:08:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.327 21:08:09 -- setup/common.sh@18 -- # local node= 00:04:35.327 21:08:09 -- setup/common.sh@19 -- # local var val 00:04:35.327 21:08:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.327 21:08:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.327 21:08:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.327 21:08:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.327 21:08:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.327 21:08:09 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36685864 kB' 'MemAvailable: 41830796 kB' 'Buffers: 4096 kB' 'Cached: 17044600 kB' 'SwapCached: 0 kB' 'Active: 12883240 kB' 'Inactive: 4709516 kB' 'Active(anon): 12404884 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546816 kB' 'Mapped: 212612 kB' 'Shmem: 11860824 kB' 'KReclaimable: 604552 kB' 'Slab: 1316352 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711800 kB' 'KernelStack: 22528 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13889580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220788 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
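(The xtrace above shows the common.sh helper taking a full snapshot of /proc/meminfo and then walking it one "var: val" pair at a time, discarding every field that does not match the key it was asked for — AnonHugePages in this pass, HugePages_Surp/Rsvd/Total in the passes that follow. A minimal stand-alone sketch of that lookup, assuming a stock /proc/meminfo layout; the function and variable names below are illustrative and not the actual setup/common.sh implementation.)

```bash
#!/usr/bin/env bash
# Minimal sketch of the field lookup traced above: split each /proc/meminfo
# line on ':' or spaces and print the value of the one requested field.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # kB for most fields, a bare page count for HugePages_*
            return 0
        fi
    done </proc/meminfo
    return 1
}

# Example: the same counters the test keeps polling.
echo "AnonHugePages:   $(get_meminfo_field AnonHugePages)"
echo "HugePages_Total: $(get_meminfo_field HugePages_Total)"
echo "HugePages_Surp:  $(get_meminfo_field HugePages_Surp)"
```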
00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.327 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.327 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.328 21:08:09 -- setup/common.sh@33 -- # echo 0 00:04:35.328 21:08:09 -- setup/common.sh@33 -- # return 0 00:04:35.328 21:08:09 -- setup/hugepages.sh@97 -- # anon=0 00:04:35.328 21:08:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.328 21:08:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.328 21:08:09 -- setup/common.sh@18 -- # local node= 00:04:35.328 21:08:09 -- setup/common.sh@19 -- # local var val 00:04:35.328 21:08:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.328 21:08:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.328 21:08:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.328 21:08:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.328 21:08:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.328 21:08:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36686316 kB' 'MemAvailable: 41831248 kB' 'Buffers: 4096 kB' 'Cached: 17044604 kB' 'SwapCached: 0 kB' 'Active: 12882940 kB' 'Inactive: 4709516 kB' 'Active(anon): 12404584 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547032 kB' 'Mapped: 213036 kB' 'Shmem: 11860828 kB' 'KReclaimable: 604552 kB' 'Slab: 1316324 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711772 kB' 'KernelStack: 22496 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13891608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.328 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.328 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- 
setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.329 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.329 21:08:09 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.330 21:08:09 -- setup/common.sh@33 -- # echo 0 00:04:35.330 21:08:09 -- setup/common.sh@33 -- # return 0 00:04:35.330 21:08:09 -- setup/hugepages.sh@99 -- # surp=0 00:04:35.330 21:08:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.330 21:08:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.330 21:08:09 -- setup/common.sh@18 -- # local node= 00:04:35.330 21:08:09 -- setup/common.sh@19 -- # local var val 00:04:35.330 21:08:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.330 21:08:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.330 21:08:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.330 21:08:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.330 21:08:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.330 21:08:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36683352 kB' 'MemAvailable: 41828284 kB' 'Buffers: 4096 kB' 'Cached: 17044616 kB' 'SwapCached: 0 kB' 'Active: 12887680 kB' 'Inactive: 4709516 kB' 'Active(anon): 12409324 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551752 kB' 'Mapped: 213036 kB' 'Shmem: 11860840 kB' 'KReclaimable: 604552 kB' 'Slab: 1316324 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711772 kB' 'KernelStack: 22528 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13895728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220760 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:35.330 21:08:09 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- 
setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.330 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.330 21:08:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 
21:08:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.331 21:08:09 -- setup/common.sh@33 -- # echo 0 00:04:35.331 
21:08:09 -- setup/common.sh@33 -- # return 0 00:04:35.331 21:08:09 -- setup/hugepages.sh@100 -- # resv=0 00:04:35.331 21:08:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.331 nr_hugepages=1024 00:04:35.331 21:08:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.331 resv_hugepages=0 00:04:35.331 21:08:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.331 surplus_hugepages=0 00:04:35.331 21:08:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.331 anon_hugepages=0 00:04:35.331 21:08:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.331 21:08:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.331 21:08:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.331 21:08:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.331 21:08:09 -- setup/common.sh@18 -- # local node= 00:04:35.331 21:08:09 -- setup/common.sh@19 -- # local var val 00:04:35.331 21:08:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.331 21:08:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.331 21:08:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.331 21:08:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.331 21:08:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.331 21:08:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36689300 kB' 'MemAvailable: 41834232 kB' 'Buffers: 4096 kB' 'Cached: 17044628 kB' 'SwapCached: 0 kB' 'Active: 12882540 kB' 'Inactive: 4709516 kB' 'Active(anon): 12404184 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546732 kB' 'Mapped: 213448 kB' 'Shmem: 11860852 kB' 'KReclaimable: 604552 kB' 'Slab: 1316324 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 711772 kB' 'KernelStack: 22544 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13907180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
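(The verify step above echoes the counters it derived — nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 — and asserts that the hugepage accounting adds up before moving on to the per-node checks. A rough stand-alone equivalent of that consistency check, assuming a stock /proc/meminfo; variable names are illustrative rather than the exact hugepages.sh ones.)

```bash
#!/usr/bin/env bash
# Sketch of the accounting check echoed above: the HugePages_Total reported
# by the kernel should equal the requested pages plus surplus and reserved.
nr_hugepages=1024   # the count requested by the test setup (assumed here)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
else
    echo "hugepage accounting mismatch: HugePages_Total=$total" >&2
    exit 1
fi
```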
00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.331 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.331 21:08:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 
00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.332 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.332 21:08:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 
21:08:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.333 21:08:09 -- setup/common.sh@33 -- # echo 1024 00:04:35.333 21:08:09 -- setup/common.sh@33 -- # return 0 00:04:35.333 21:08:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.333 21:08:09 -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.333 21:08:09 -- setup/hugepages.sh@27 -- # local node 00:04:35.333 21:08:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.333 21:08:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.333 21:08:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.333 21:08:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:35.333 21:08:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.333 21:08:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.333 21:08:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.333 21:08:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.333 21:08:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.333 21:08:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.333 21:08:09 
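The trace above shows setup/common.sh scanning meminfo one "key: value" pair at a time with IFS=': ' until it reaches HugePages_Total, then echoing the value (1024) back to hugepages.sh. A minimal sketch of that parsing pattern follows; the helper name get_meminfo_value and its interface are illustrative assumptions, not the script's actual get_meminfo function.

# Sketch only: read one field from /proc/meminfo or a per-node meminfo file,
# mirroring the IFS=': ' / read -r loop visible in the trace above.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val
    # Prefer the per-node file when a node index is supplied and it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        # Per-node files prefix every entry with "Node N "; strip that first.
        line=${line#Node "$node" }
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # numeric value only; any trailing "kB" lands in $_
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Example (value from this run): get_meminfo_value HugePages_Total -> 1024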
-- setup/common.sh@18 -- # local node=0 00:04:35.333 21:08:09 -- setup/common.sh@19 -- # local var val 00:04:35.333 21:08:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.333 21:08:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.333 21:08:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.333 21:08:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.333 21:08:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.333 21:08:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21978068 kB' 'MemUsed: 10614016 kB' 'SwapCached: 0 kB' 'Active: 6569636 kB' 'Inactive: 569080 kB' 'Active(anon): 6292324 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6972156 kB' 'Mapped: 84824 kB' 'AnonPages: 170240 kB' 'Shmem: 6125764 kB' 'KernelStack: 11720 kB' 'PageTables: 4888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389720 kB' 'Slab: 728720 kB' 'SReclaimable: 389720 kB' 'SUnreclaim: 339000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.333 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.333 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # continue 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.334 21:08:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.334 21:08:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.334 21:08:09 -- setup/common.sh@33 -- # echo 0 00:04:35.334 21:08:09 -- setup/common.sh@33 -- # return 0 00:04:35.334 21:08:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.334 21:08:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.334 21:08:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.334 21:08:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.334 21:08:09 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:35.334 node0=1024 expecting 1024 00:04:35.334 21:08:09 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:35.334 21:08:09 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:35.334 21:08:09 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:35.334 21:08:09 -- setup/hugepages.sh@202 -- # setup output 00:04:35.334 21:08:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.334 21:08:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:38.624 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:38.624 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:38.624 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:38.624 21:08:13 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 
00:04:38.624 21:08:13 -- setup/hugepages.sh@89 -- # local node 00:04:38.624 21:08:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.624 21:08:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.624 21:08:13 -- setup/hugepages.sh@92 -- # local surp 00:04:38.624 21:08:13 -- setup/hugepages.sh@93 -- # local resv 00:04:38.624 21:08:13 -- setup/hugepages.sh@94 -- # local anon 00:04:38.624 21:08:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.624 21:08:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.624 21:08:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.624 21:08:13 -- setup/common.sh@18 -- # local node= 00:04:38.624 21:08:13 -- setup/common.sh@19 -- # local var val 00:04:38.624 21:08:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.624 21:08:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.624 21:08:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.624 21:08:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.624 21:08:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.624 21:08:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36691412 kB' 'MemAvailable: 41836344 kB' 'Buffers: 4096 kB' 'Cached: 17044720 kB' 'SwapCached: 0 kB' 'Active: 12882576 kB' 'Inactive: 4709516 kB' 'Active(anon): 12404220 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546020 kB' 'Mapped: 212620 kB' 'Shmem: 11860944 kB' 'KReclaimable: 604552 kB' 'Slab: 1316984 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712432 kB' 'KernelStack: 22480 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13889856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Buffers 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 
21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.624 21:08:13 -- setup/common.sh@33 -- # echo 0 00:04:38.624 21:08:13 -- setup/common.sh@33 -- # return 0 00:04:38.624 21:08:13 -- setup/hugepages.sh@97 -- # anon=0 00:04:38.624 21:08:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.624 21:08:13 -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:04:38.624 21:08:13 -- setup/common.sh@18 -- # local node= 00:04:38.624 21:08:13 -- setup/common.sh@19 -- # local var val 00:04:38.624 21:08:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.624 21:08:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.624 21:08:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.624 21:08:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.624 21:08:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.624 21:08:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.624 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.624 21:08:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36690944 kB' 'MemAvailable: 41835876 kB' 'Buffers: 4096 kB' 'Cached: 17044728 kB' 'SwapCached: 0 kB' 'Active: 12881736 kB' 'Inactive: 4709516 kB' 'Active(anon): 12403380 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545708 kB' 'Mapped: 212536 kB' 'Shmem: 11860952 kB' 'KReclaimable: 604552 kB' 'Slab: 1316976 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712424 kB' 'KernelStack: 22480 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13889872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 
21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 
21:08:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.625 21:08:13 -- setup/common.sh@33 -- # echo 0 00:04:38.625 21:08:13 -- setup/common.sh@33 -- # return 0 00:04:38.625 21:08:13 -- setup/hugepages.sh@99 -- # surp=0 00:04:38.625 21:08:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.625 21:08:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.625 21:08:13 -- setup/common.sh@18 -- # local node= 00:04:38.625 21:08:13 -- setup/common.sh@19 -- # local var val 00:04:38.625 21:08:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.625 21:08:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.625 21:08:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.625 21:08:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.625 21:08:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.625 21:08:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36690892 kB' 'MemAvailable: 41835824 kB' 'Buffers: 4096 kB' 'Cached: 17044744 kB' 'SwapCached: 0 kB' 'Active: 12881740 kB' 'Inactive: 4709516 kB' 
'Active(anon): 12403384 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545704 kB' 'Mapped: 212536 kB' 'Shmem: 11860968 kB' 'KReclaimable: 604552 kB' 'Slab: 1316976 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712424 kB' 'KernelStack: 22480 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13890024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.625 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.625 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 
21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 
-- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.626 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.626 21:08:13 -- setup/common.sh@33 -- # echo 0 00:04:38.626 21:08:13 -- setup/common.sh@33 -- # return 0 00:04:38.626 21:08:13 -- setup/hugepages.sh@100 -- # resv=0 00:04:38.626 21:08:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:38.626 nr_hugepages=1024 00:04:38.626 21:08:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.626 resv_hugepages=0 00:04:38.626 21:08:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.626 surplus_hugepages=0 00:04:38.626 21:08:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.626 anon_hugepages=0 00:04:38.626 21:08:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.626 21:08:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.626 21:08:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.626 21:08:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.626 21:08:13 -- setup/common.sh@18 -- # local node= 00:04:38.626 21:08:13 -- setup/common.sh@19 -- # local var val 00:04:38.626 21:08:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.626 21:08:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.626 21:08:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.626 21:08:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.626 21:08:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.626 21:08:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.626 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36690896 kB' 'MemAvailable: 41835828 kB' 'Buffers: 4096 kB' 'Cached: 17044768 kB' 'SwapCached: 0 kB' 'Active: 12881908 kB' 'Inactive: 4709516 kB' 'Active(anon): 12403552 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545820 kB' 'Mapped: 212536 kB' 'Shmem: 11860992 kB' 'KReclaimable: 604552 kB' 'Slab: 1316976 kB' 'SReclaimable: 604552 kB' 'SUnreclaim: 712424 kB' 'KernelStack: 22512 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 13890404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 
'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 
-- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.627 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.627 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.627 21:08:13 -- setup/common.sh@33 -- # echo 1024 00:04:38.627 21:08:13 -- 
setup/common.sh@33 -- # return 0 00:04:38.628 21:08:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.628 21:08:13 -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.628 21:08:13 -- setup/hugepages.sh@27 -- # local node 00:04:38.628 21:08:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.628 21:08:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.628 21:08:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.628 21:08:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.628 21:08:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.628 21:08:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.628 21:08:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.628 21:08:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.628 21:08:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.628 21:08:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.628 21:08:13 -- setup/common.sh@18 -- # local node=0 00:04:38.628 21:08:13 -- setup/common.sh@19 -- # local var val 00:04:38.628 21:08:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.628 21:08:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.628 21:08:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.628 21:08:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.628 21:08:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.628 21:08:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21985804 kB' 'MemUsed: 10606280 kB' 'SwapCached: 0 kB' 'Active: 6563840 kB' 'Inactive: 569080 kB' 'Active(anon): 6286528 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6972168 kB' 'Mapped: 84824 kB' 'AnonPages: 163848 kB' 'Shmem: 6125776 kB' 'KernelStack: 11704 kB' 'PageTables: 4828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389720 kB' 'Slab: 729332 kB' 'SReclaimable: 389720 kB' 'SUnreclaim: 339612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 
-- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- 
setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # continue 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.628 21:08:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.628 21:08:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.628 21:08:13 -- setup/common.sh@33 -- # echo 0 00:04:38.628 21:08:13 -- setup/common.sh@33 -- # return 0 00:04:38.628 21:08:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.628 21:08:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.628 21:08:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.628 21:08:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.628 21:08:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.628 node0=1024 expecting 1024 00:04:38.628 21:08:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.628 00:04:38.628 real 0m7.813s 00:04:38.628 user 0m2.688s 00:04:38.628 sys 0m4.995s 00:04:38.628 21:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.628 21:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:38.628 ************************************ 00:04:38.628 END TEST no_shrink_alloc 00:04:38.628 ************************************ 00:04:38.628 21:08:13 -- setup/hugepages.sh@217 -- # clear_hp 00:04:38.628 21:08:13 -- setup/hugepages.sh@37 -- # local node hp 00:04:38.628 21:08:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:38.628 21:08:13 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.628 21:08:13 -- setup/hugepages.sh@41 -- # echo 0 00:04:38.628 21:08:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.628 21:08:13 -- setup/hugepages.sh@41 -- # echo 0 00:04:38.628 21:08:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:38.628 21:08:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.628 21:08:13 -- setup/hugepages.sh@41 -- # echo 0 00:04:38.628 21:08:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.628 21:08:13 -- setup/hugepages.sh@41 -- # echo 0 00:04:38.628 21:08:13 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:38.628 21:08:13 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:38.628 00:04:38.628 real 0m30.879s 00:04:38.628 user 0m10.259s 00:04:38.628 sys 0m18.816s 00:04:38.628 21:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.628 21:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:38.628 ************************************ 00:04:38.628 END TEST hugepages 00:04:38.628 ************************************ 00:04:38.628 21:08:13 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:38.628 21:08:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.628 21:08:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.628 21:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:38.628 ************************************ 00:04:38.628 START TEST driver 00:04:38.628 ************************************ 00:04:38.628 21:08:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:38.887 * Looking for test storage... 
00:04:38.887 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:38.887 21:08:13 -- setup/driver.sh@68 -- # setup reset 00:04:38.887 21:08:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.887 21:08:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.159 21:08:18 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:44.159 21:08:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.159 21:08:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.159 21:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:44.159 ************************************ 00:04:44.159 START TEST guess_driver 00:04:44.159 ************************************ 00:04:44.159 21:08:18 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:44.159 21:08:18 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:44.159 21:08:18 -- setup/driver.sh@47 -- # local fail=0 00:04:44.159 21:08:18 -- setup/driver.sh@49 -- # pick_driver 00:04:44.159 21:08:18 -- setup/driver.sh@36 -- # vfio 00:04:44.159 21:08:18 -- setup/driver.sh@21 -- # local iommu_grups 00:04:44.159 21:08:18 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:44.159 21:08:18 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:44.159 21:08:18 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:44.159 21:08:18 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:44.159 21:08:18 -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:04:44.159 21:08:18 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:44.159 21:08:18 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:44.159 21:08:18 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:44.159 21:08:18 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:44.159 21:08:18 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:44.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:44.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:44.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:44.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:44.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:44.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:44.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:44.159 21:08:18 -- setup/driver.sh@30 -- # return 0 00:04:44.159 21:08:18 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:44.159 21:08:18 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:44.159 21:08:18 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:44.159 21:08:18 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:44.159 Looking for driver=vfio-pci 00:04:44.159 21:08:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.159 21:08:18 -- setup/driver.sh@45 -- # setup output config 00:04:44.159 21:08:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.159 21:08:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:48.351 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.351 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:04:48.351 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.352 21:08:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.352 21:08:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.352 21:08:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.728 21:08:24 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:04:49.728 21:08:24 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.728 21:08:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.728 21:08:24 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:49.728 21:08:24 -- setup/driver.sh@65 -- # setup reset 00:04:49.728 21:08:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.728 21:08:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.300 00:04:56.300 real 0m11.163s 00:04:56.300 user 0m2.853s 00:04:56.300 sys 0m5.569s 00:04:56.300 21:08:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.300 21:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:56.300 ************************************ 00:04:56.300 END TEST guess_driver 00:04:56.300 ************************************ 00:04:56.300 00:04:56.300 real 0m16.593s 00:04:56.300 user 0m4.369s 00:04:56.300 sys 0m8.682s 00:04:56.300 21:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.300 21:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:56.300 ************************************ 00:04:56.300 END TEST driver 00:04:56.300 ************************************ 00:04:56.300 21:08:30 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:56.300 21:08:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.300 21:08:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.300 21:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:56.300 ************************************ 00:04:56.300 START TEST devices 00:04:56.300 ************************************ 00:04:56.300 21:08:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:56.300 * Looking for test storage... 
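
The guess_driver trace above settles on vfio-pci: it checks /sys/module/vfio/parameters/enable_unsafe_noiommu_mode (unsafe_vfio=N here), counts the entries under /sys/kernel/iommu_groups (256 here, so an IOMMU is active), and runs "modprobe --show-depends vfio_pci" to confirm the module resolves to real .ko files before echoing the driver name; "No valid driver found" is the fallback string it compares against afterwards. A rough sketch of that decision (the function name is made up, and the group count uses find rather than the script's glob array):

    # guess_vfio_driver - hypothetical condensed version of the pick_driver/vfio logic traced above.
    guess_vfio_driver() {
        local ngroups
        # vfio-pci is only usable when the kernel exposes at least one IOMMU group.
        ngroups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
        if ((ngroups > 0)) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
            return 1
        fi
    }
    driver=$(guess_vfio_driver)    # the run above then prints: Looking for driver=vfio-pci
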
00:04:56.300 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:56.300 21:08:30 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:56.300 21:08:30 -- setup/devices.sh@192 -- # setup reset 00:04:56.301 21:08:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.301 21:08:30 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.496 21:08:34 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:00.496 21:08:34 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:00.496 21:08:34 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:00.496 21:08:34 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:00.496 21:08:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:00.496 21:08:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:00.496 21:08:34 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:00.496 21:08:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.496 21:08:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:00.496 21:08:34 -- setup/devices.sh@196 -- # blocks=() 00:05:00.496 21:08:34 -- setup/devices.sh@196 -- # declare -a blocks 00:05:00.496 21:08:34 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:00.496 21:08:34 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:00.496 21:08:34 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:00.496 21:08:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.496 21:08:34 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:00.496 21:08:34 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:00.496 21:08:34 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:00.496 21:08:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:00.496 21:08:34 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:00.496 21:08:34 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:00.496 21:08:34 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:00.496 No valid GPT data, bailing 00:05:00.496 21:08:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.496 21:08:34 -- scripts/common.sh@393 -- # pt= 00:05:00.496 21:08:34 -- scripts/common.sh@394 -- # return 1 00:05:00.496 21:08:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:00.496 21:08:34 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:00.496 21:08:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:00.496 21:08:34 -- setup/common.sh@80 -- # echo 2000398934016 00:05:00.496 21:08:34 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:00.496 21:08:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.496 21:08:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:05:00.496 21:08:34 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:00.496 21:08:34 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:00.496 21:08:34 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:00.496 21:08:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.496 21:08:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.496 21:08:34 -- common/autotest_common.sh@10 -- # set +x 00:05:00.496 ************************************ 00:05:00.496 START TEST nvme_mount 00:05:00.496 ************************************ 00:05:00.496 21:08:34 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:05:00.496 21:08:34 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:00.496 21:08:34 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:00.496 21:08:34 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.496 21:08:34 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.496 21:08:34 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:00.496 21:08:34 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:00.496 21:08:34 -- setup/common.sh@40 -- # local part_no=1 00:05:00.496 21:08:34 -- setup/common.sh@41 -- # local size=1073741824 00:05:00.496 21:08:34 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:00.496 21:08:34 -- setup/common.sh@44 -- # parts=() 00:05:00.496 21:08:34 -- setup/common.sh@44 -- # local parts 00:05:00.496 21:08:34 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:00.496 21:08:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.496 21:08:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.496 21:08:34 -- setup/common.sh@46 -- # (( part++ )) 00:05:00.496 21:08:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.496 21:08:34 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:00.496 21:08:34 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:00.496 21:08:34 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:01.067 Creating new GPT entries in memory. 00:05:01.067 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.067 other utilities. 00:05:01.067 21:08:35 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.067 21:08:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.067 21:08:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.067 21:08:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.067 21:08:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:02.075 Creating new GPT entries in memory. 00:05:02.075 The operation has completed successfully. 
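
The sgdisk output just above is the partition_drive step of the nvme_mount test: the 1 GiB partition size is converted to 512-byte sectors (1073741824 / 512 = 2097152), the disk is wiped with --zap-all, and a single partition is created from sector 2048 through 2099199 while scripts/sync_dev_uevents.sh waits in the background for the nvme0n1p1 uevent (the "wait 1478882" that follows is the shell waiting on that helper's PID). A simplified sketch of the same sequence (a reconstruction; the uevent wait is summarised in a comment rather than reproduced):

    # Sketch of the partition_drive pattern traced above: one 1 GiB partition on nvme0n1.
    disk=nvme0n1
    size=$((1073741824 / 512))            # bytes -> 512-byte sectors = 2097152
    part_start=2048                       # first usable sector after the GPT header
    part_end=$((part_start + size - 1))   # 2099199, matching --new=1:2048:2099199
    sgdisk "/dev/$disk" --zap-all         # destroy any existing GPT/MBR structures
    # Hold the whole-disk lock while creating partition 1, as the traced flock call does.
    flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:${part_start}:${part_end}
    # The real run additionally waits for the kernel uevent announcing /dev/nvme0n1p1
    # (scripts/sync_dev_uevents.sh block/partition nvme0n1p1) before touching the partition.
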
00:05:02.075 21:08:36 -- setup/common.sh@57 -- # (( part++ )) 00:05:02.075 21:08:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.075 21:08:36 -- setup/common.sh@62 -- # wait 1478882 00:05:02.075 21:08:36 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.075 21:08:36 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:02.075 21:08:36 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.075 21:08:36 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:02.075 21:08:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:02.075 21:08:36 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.075 21:08:36 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.075 21:08:36 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:02.075 21:08:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:02.075 21:08:36 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.075 21:08:36 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.075 21:08:36 -- setup/devices.sh@53 -- # local found=0 00:05:02.075 21:08:36 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.075 21:08:36 -- setup/devices.sh@56 -- # : 00:05:02.075 21:08:36 -- setup/devices.sh@59 -- # local pci status 00:05:02.075 21:08:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.075 21:08:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:02.075 21:08:36 -- setup/devices.sh@47 -- # setup output config 00:05:02.075 21:08:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.075 21:08:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:06.273 21:08:40 -- setup/devices.sh@63 -- # found=1 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 21:08:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.273 21:08:40 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:06.273 21:08:40 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.273 21:08:40 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.273 21:08:40 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.273 21:08:40 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:06.273 21:08:40 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.273 21:08:40 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.273 21:08:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:06.273 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:06.273 21:08:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:06.273 21:08:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:06.273 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:06.273 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:06.273 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:06.273 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
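The wipefs output above is the cleanup_nvme step: unmount the test file system, then scrub signatures from the partition and from the whole disk so the next test starts on a blank device. A rough sketch, with the mount point path as a placeholder:

    mnt=/path/to/nvme_mount                                  # placeholder mount point for the sketch
    mountpoint -q "$mnt" && umount "$mnt"                    # unmount only if still mounted
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # erases the ext4 magic (53 ef)
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # erases GPT headers and the protective MBR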
00:05:06.273 21:08:40 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:06.273 21:08:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:06.273 21:08:40 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.273 21:08:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:06.273 21:08:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:06.273 21:08:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.273 21:08:40 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.273 21:08:40 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:06.273 21:08:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:06.273 21:08:40 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.273 21:08:40 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.273 21:08:40 -- setup/devices.sh@53 -- # local found=0 00:05:06.273 21:08:40 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.274 21:08:40 -- setup/devices.sh@56 -- # : 00:05:06.274 21:08:40 -- setup/devices.sh@59 -- # local pci status 00:05:06.274 21:08:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.274 21:08:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:06.274 21:08:40 -- setup/devices.sh@47 -- # setup output config 00:05:06.274 21:08:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.274 21:08:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:10.470 21:08:44 -- setup/devices.sh@63 -- # found=1 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 
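The long runs of `[[ 0000:xx:xx.x == ... ]]` comparisons in this trace are the verify helper scanning the setup script's per-device report: every PCI function except the allow-listed NVMe drive is skipped, and the reported status is glob-matched against the mount the test expects to find. A rough sketch of that idea, assuming one report line per device of the form '<pci-address> <...> <...> <status text>' (the exact columns are not spelled out in the log):

    target=0000:d8:00.0                # the NVMe function under test
    expected=nvme0n1:nvme0n1           # device (or device:partition) expected to hold the mount
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$target" ]] || continue
        # e.g. "Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$expected"* ]] && found=1
    done < report.txt                  # stand-in for the config report the test actually reads
    (( found == 1 )) || echo "expected $expected to be active on $target" >&2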
00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:10.470 21:08:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.470 21:08:45 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:10.470 21:08:45 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.470 21:08:45 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:10.470 21:08:45 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:10.470 21:08:45 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.470 21:08:45 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:10.470 21:08:45 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:10.470 21:08:45 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:10.470 21:08:45 -- setup/devices.sh@50 -- # local mount_point= 00:05:10.470 21:08:45 -- setup/devices.sh@51 -- # local test_file= 00:05:10.470 21:08:45 -- setup/devices.sh@53 -- # local found=0 00:05:10.470 21:08:45 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:10.470 21:08:45 -- setup/devices.sh@59 -- # local pci status 00:05:10.470 21:08:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.470 21:08:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:10.470 21:08:45 -- setup/devices.sh@47 -- # setup output config 00:05:10.470 21:08:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.470 21:08:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ Active 
devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:14.664 21:08:48 -- setup/devices.sh@63 -- # found=1 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.664 21:08:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.664 21:08:48 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:14.664 21:08:48 -- setup/devices.sh@68 -- # return 0 00:05:14.664 21:08:48 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:14.664 21:08:48 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.664 21:08:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.664 21:08:48 -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:05:14.664 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:14.664 00:05:14.664 real 0m14.224s 00:05:14.664 user 0m4.089s 00:05:14.664 sys 0m7.910s 00:05:14.664 21:08:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.664 21:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:14.664 ************************************ 00:05:14.664 END TEST nvme_mount 00:05:14.664 ************************************ 00:05:14.664 21:08:48 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:14.664 21:08:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.664 21:08:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.664 21:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:14.664 ************************************ 00:05:14.664 START TEST dm_mount 00:05:14.664 ************************************ 00:05:14.664 21:08:48 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:14.664 21:08:48 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:14.664 21:08:48 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:14.664 21:08:48 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:14.664 21:08:48 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:14.664 21:08:48 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:14.664 21:08:48 -- setup/common.sh@40 -- # local part_no=2 00:05:14.664 21:08:48 -- setup/common.sh@41 -- # local size=1073741824 00:05:14.664 21:08:48 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:14.664 21:08:48 -- setup/common.sh@44 -- # parts=() 00:05:14.664 21:08:48 -- setup/common.sh@44 -- # local parts 00:05:14.664 21:08:48 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:14.664 21:08:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.664 21:08:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:14.664 21:08:48 -- setup/common.sh@46 -- # (( part++ )) 00:05:14.664 21:08:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.664 21:08:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:14.664 21:08:48 -- setup/common.sh@46 -- # (( part++ )) 00:05:14.664 21:08:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.664 21:08:48 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:14.664 21:08:48 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:14.664 21:08:48 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:15.233 Creating new GPT entries in memory. 00:05:15.233 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:15.233 other utilities. 00:05:15.233 21:08:49 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:15.233 21:08:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.233 21:08:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:15.233 21:08:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:15.233 21:08:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:16.168 Creating new GPT entries in memory. 00:05:16.168 The operation has completed successfully. 00:05:16.168 21:08:51 -- setup/common.sh@57 -- # (( part++ )) 00:05:16.168 21:08:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.168 21:08:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:16.168 21:08:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.168 21:08:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:17.548 The operation has completed successfully. 00:05:17.548 21:08:52 -- setup/common.sh@57 -- # (( part++ )) 00:05:17.548 21:08:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.548 21:08:52 -- setup/common.sh@62 -- # wait 1484154 00:05:17.548 21:08:52 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:17.548 21:08:52 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.548 21:08:52 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.548 21:08:52 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:17.548 21:08:52 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:17.548 21:08:52 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.548 21:08:52 -- setup/devices.sh@161 -- # break 00:05:17.548 21:08:52 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.548 21:08:52 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:17.548 21:08:52 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:17.548 21:08:52 -- setup/devices.sh@166 -- # dm=dm-2 00:05:17.548 21:08:52 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:17.548 21:08:52 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:17.548 21:08:52 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.548 21:08:52 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:17.548 21:08:52 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.548 21:08:52 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.548 21:08:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:17.548 21:08:52 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.548 21:08:52 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.548 21:08:52 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:17.548 21:08:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:17.548 21:08:52 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:17.548 21:08:52 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.548 21:08:52 -- setup/devices.sh@53 -- # local found=0 00:05:17.548 21:08:52 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:17.548 21:08:52 -- setup/devices.sh@56 -- # : 00:05:17.548 21:08:52 -- setup/devices.sh@59 -- # local pci status 00:05:17.548 21:08:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.548 21:08:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:17.548 21:08:52 -- setup/devices.sh@47 -- # setup output config 
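The dm_mount test above layers device-mapper on top of the two freshly created partitions: create a mapped device named nvme_dm_test, resolve the dm-N kernel name behind the /dev/mapper symlink, confirm both partitions list it as a holder, then format and mount it. A sketch under the assumption that the mapping is a simple linear concatenation of the two partitions (the actual dmsetup table is not shown in the log):

    p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2
    name=nvme_dm_test
    s1=$(blockdev --getsz "$p1"); s2=$(blockdev --getsz "$p2")        # sizes in 512-byte sectors

    # Two linear targets: p1 mapped at sector 0, p2 appended right after it.
    printf '%s\n' "0 $s1 linear $p1 0" "$s1 $s2 linear $p2 0" | dmsetup create "$name"

    dm=$(basename "$(readlink -f /dev/mapper/$name)")                 # e.g. dm-2
    [[ -e /sys/class/block/${p1##*/}/holders/$dm ]] && \
    [[ -e /sys/class/block/${p2##*/}/holders/$dm ]] && echo "holders OK"

    mkfs.ext4 -qF /dev/mapper/$name
    mkdir -p /tmp/dm_mount && mount /dev/mapper/$name /tmp/dm_mount   # /tmp/dm_mount is illustrative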
00:05:17.548 21:08:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.548 21:08:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:21.743 21:08:56 -- setup/devices.sh@63 -- # found=1 00:05:21.743 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.743 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.743 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.743 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.743 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.743 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.743 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.743 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.743 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.744 21:08:56 -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:21.744 21:08:56 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:21.744 21:08:56 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.744 21:08:56 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:21.744 21:08:56 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:21.744 21:08:56 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:21.744 21:08:56 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:21.744 21:08:56 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:21.744 21:08:56 -- setup/devices.sh@50 -- # local mount_point= 00:05:21.744 21:08:56 -- setup/devices.sh@51 -- # local test_file= 00:05:21.744 21:08:56 -- setup/devices.sh@53 -- # local found=0 00:05:21.744 21:08:56 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:21.744 21:08:56 -- setup/devices.sh@59 -- # local pci status 00:05:21.744 21:08:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.744 21:08:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:21.744 21:08:56 -- setup/devices.sh@47 -- # setup output config 00:05:21.744 21:08:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.744 21:08:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:25.974 21:08:59 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:08:59 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:25.974 21:08:59 -- setup/devices.sh@63 -- # found=1 00:05:25.974 21:08:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:08:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:08:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:08:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:08:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:08:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:08:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:08:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:08:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.974 21:09:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.974 21:09:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:25.974 21:09:00 -- setup/devices.sh@68 -- # return 0 00:05:25.974 21:09:00 -- setup/devices.sh@187 -- # cleanup_dm 00:05:25.974 21:09:00 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:25.974 21:09:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:25.974 21:09:00 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:25.974 21:09:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:25.974 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.974 21:09:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:25.974 00:05:25.974 real 0m11.311s 00:05:25.974 user 0m2.984s 00:05:25.974 sys 0m5.477s 00:05:25.974 21:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.974 21:09:00 -- common/autotest_common.sh@10 -- # set +x 00:05:25.974 ************************************ 00:05:25.974 END TEST dm_mount 00:05:25.974 ************************************ 00:05:25.974 21:09:00 -- setup/devices.sh@1 -- # cleanup 00:05:25.974 21:09:00 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:25.974 21:09:00 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.974 21:09:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:25.974 21:09:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.974 21:09:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.975 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:25.975 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:25.975 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:25.975 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:25.975 21:09:00 -- setup/devices.sh@12 
-- # cleanup_dm 00:05:25.975 21:09:00 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:25.975 21:09:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:25.975 21:09:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.975 21:09:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:25.975 21:09:00 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.975 21:09:00 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:25.975 00:05:25.975 real 0m30.548s 00:05:25.975 user 0m8.764s 00:05:25.975 sys 0m16.631s 00:05:25.975 21:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.975 21:09:00 -- common/autotest_common.sh@10 -- # set +x 00:05:25.975 ************************************ 00:05:25.975 END TEST devices 00:05:25.975 ************************************ 00:05:25.975 00:05:25.975 real 1m46.689s 00:05:25.975 user 0m32.241s 00:05:25.975 sys 1m1.868s 00:05:25.975 21:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.975 21:09:00 -- common/autotest_common.sh@10 -- # set +x 00:05:25.975 ************************************ 00:05:25.975 END TEST setup.sh 00:05:25.975 ************************************ 00:05:25.975 21:09:00 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:30.173 Hugepages 00:05:30.173 node hugesize free / total 00:05:30.173 node0 1048576kB 0 / 0 00:05:30.173 node0 2048kB 2048 / 2048 00:05:30.173 node1 1048576kB 0 / 0 00:05:30.173 node1 2048kB 0 / 0 00:05:30.173 00:05:30.173 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.173 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:30.173 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:30.173 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:30.173 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:30.173 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:30.173 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:30.173 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:30.173 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:30.173 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:30.173 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:30.173 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:30.173 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:30.173 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:30.173 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:30.173 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:30.173 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:30.173 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:30.173 21:09:04 -- spdk/autotest.sh@141 -- # uname -s 00:05:30.173 21:09:04 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:30.173 21:09:04 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:30.173 21:09:04 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:34.369 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:05:34.369 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:36.356 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:36.356 21:09:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:37.292 21:09:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:37.292 21:09:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:37.292 21:09:11 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:37.292 21:09:11 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:37.292 21:09:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:37.292 21:09:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:37.292 21:09:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:37.292 21:09:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:37.292 21:09:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:37.292 21:09:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:37.292 21:09:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:37.292 21:09:11 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:41.485 Waiting for block devices as requested 00:05:41.485 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:41.485 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:41.485 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:41.485 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:41.485 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:41.485 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:41.744 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:41.744 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:41.744 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:42.003 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:42.003 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:42.003 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:42.263 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:42.263 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:42.263 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:42.522 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:42.522 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:42.522 21:09:17 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:42.522 21:09:17 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:42.522 21:09:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:42.522 21:09:17 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:42.522 21:09:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:42.522 21:09:17 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:42.782 21:09:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:42.782 21:09:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:42.782 21:09:17 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 
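The get_nvme_bdfs and get_nvme_ctrlr_from_bdf helpers traced here reduce to two lookups: ask gen_nvme.sh which PCI addresses carry NVMe namespaces, then walk sysfs to find the /dev/nvmeX controller behind each address (the readlink/grep pair at the end of the trace). A compact sketch, with $rootdir as a placeholder for the repository checkout:

    rootdir=/path/to/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        for c in /sys/class/nvme/nvme*; do
            # e.g. /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0
            readlink -f "$c" | grep -q "$bdf/nvme/" && echo "$bdf -> /dev/$(basename "$c")"
        done
    done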
00:05:42.782 21:09:17 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:42.782 21:09:17 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:42.782 21:09:17 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:42.782 21:09:17 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:42.782 21:09:17 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:05:42.782 21:09:17 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:42.782 21:09:17 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:42.782 21:09:17 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:42.782 21:09:17 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:42.782 21:09:17 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:42.782 21:09:17 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:42.782 21:09:17 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:42.782 21:09:17 -- common/autotest_common.sh@1542 -- # continue 00:05:42.782 21:09:17 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:42.782 21:09:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:42.782 21:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:42.782 21:09:17 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:42.782 21:09:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:42.782 21:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:42.782 21:09:17 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:46.978 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:46.978 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:48.884 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:48.884 21:09:23 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:48.884 21:09:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:48.884 21:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:49.144 21:09:23 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:49.144 21:09:23 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:49.144 21:09:23 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:49.144 21:09:23 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:49.144 21:09:23 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:49.144 21:09:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:49.144 21:09:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:49.144 21:09:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:49.144 21:09:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:49.144 21:09:23 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:49.144 21:09:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:49.144 21:09:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:49.144 21:09:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:49.144 21:09:23 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:49.144 21:09:23 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:49.144 21:09:23 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:49.144 21:09:23 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:49.144 21:09:23 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:49.144 21:09:23 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:05:49.144 21:09:23 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:05:49.144 21:09:23 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1496440 00:05:49.144 21:09:23 -- common/autotest_common.sh@1583 -- # waitforlisten 1496440 00:05:49.144 21:09:23 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.144 21:09:23 -- common/autotest_common.sh@819 -- # '[' -z 1496440 ']' 00:05:49.144 21:09:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.144 21:09:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.144 21:09:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.144 21:09:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.144 21:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:49.144 [2024-07-26 21:09:23.972894] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
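The spdk_tgt_pid / waitforlisten pair above starts the SPDK target and blocks until its JSON-RPC socket answers. A minimal stand-in for that handshake (waitforlisten itself does more bookkeeping; the polling loop and retry count here are illustrative):

    rootdir=/path/to/spdk                      # placeholder for the repository checkout
    sock=/var/tmp/spdk.sock

    "$rootdir/build/bin/spdk_tgt" &            # launch the target in the background
    spdk_tgt_pid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for _ in $(seq 1 100); do
        "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done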
00:05:49.144 [2024-07-26 21:09:23.972947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496440 ] 00:05:49.144 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.403 [2024-07-26 21:09:24.058107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.403 [2024-07-26 21:09:24.096470] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.403 [2024-07-26 21:09:24.096589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.970 21:09:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.970 21:09:24 -- common/autotest_common.sh@852 -- # return 0 00:05:49.970 21:09:24 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:49.970 21:09:24 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:49.970 21:09:24 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:53.262 nvme0n1 00:05:53.262 21:09:27 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:53.262 [2024-07-26 21:09:27.891083] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:53.262 request: 00:05:53.262 { 00:05:53.262 "nvme_ctrlr_name": "nvme0", 00:05:53.262 "password": "test", 00:05:53.262 "method": "bdev_nvme_opal_revert", 00:05:53.262 "req_id": 1 00:05:53.262 } 00:05:53.262 Got JSON-RPC error response 00:05:53.262 response: 00:05:53.262 { 00:05:53.262 "code": -32602, 00:05:53.262 "message": "Invalid parameters" 00:05:53.262 } 00:05:53.262 21:09:27 -- common/autotest_common.sh@1589 -- # true 00:05:53.262 21:09:27 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:53.262 21:09:27 -- common/autotest_common.sh@1593 -- # killprocess 1496440 00:05:53.262 21:09:27 -- common/autotest_common.sh@926 -- # '[' -z 1496440 ']' 00:05:53.262 21:09:27 -- common/autotest_common.sh@930 -- # kill -0 1496440 00:05:53.262 21:09:27 -- common/autotest_common.sh@931 -- # uname 00:05:53.262 21:09:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:53.262 21:09:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1496440 00:05:53.262 21:09:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:53.262 21:09:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:53.262 21:09:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1496440' 00:05:53.262 killing process with pid 1496440 00:05:53.262 21:09:27 -- common/autotest_common.sh@945 -- # kill 1496440 00:05:53.262 21:09:27 -- common/autotest_common.sh@950 -- # wait 1496440 00:05:53.262 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.262 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.262 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.262 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.263 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.263 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.263 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.263 EAL: Unexpected size 0 of DMA remapping cleared instead of 
2097152
0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:53.264 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.801 21:09:30 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:55.801 21:09:30 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:55.801 21:09:30 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:55.801 21:09:30 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:55.801 21:09:30 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:55.801 21:09:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:55.801 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:55.801 21:09:30 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:55.801 21:09:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.801 21:09:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.801 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:55.801 ************************************ 00:05:55.801 START TEST env 00:05:55.801 ************************************ 00:05:55.801 21:09:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:55.801 * Looking for test storage... 
00:05:55.801 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:55.801 21:09:30 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:55.801 21:09:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.801 21:09:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.801 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:55.801 ************************************ 00:05:55.801 START TEST env_memory 00:05:55.801 ************************************ 00:05:55.801 21:09:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:55.801 00:05:55.801 00:05:55.801 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.801 http://cunit.sourceforge.net/ 00:05:55.801 00:05:55.801 00:05:55.801 Suite: memory 00:05:55.801 Test: alloc and free memory map ...[2024-07-26 21:09:30.549926] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:55.801 passed 00:05:55.801 Test: mem map translation ...[2024-07-26 21:09:30.567985] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:55.801 [2024-07-26 21:09:30.568000] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:55.801 [2024-07-26 21:09:30.568034] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:55.801 [2024-07-26 21:09:30.568043] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:55.801 passed 00:05:55.801 Test: mem map registration ...[2024-07-26 21:09:30.603036] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:55.801 [2024-07-26 21:09:30.603051] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:55.801 passed 00:05:55.801 Test: mem map adjacent registrations ...passed 00:05:55.801 00:05:55.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.801 suites 1 1 n/a 0 0 00:05:55.801 tests 4 4 4 0 0 00:05:55.801 asserts 152 152 152 0 n/a 00:05:55.801 00:05:55.801 Elapsed time = 0.129 seconds 00:05:55.801 00:05:55.801 real 0m0.143s 00:05:55.801 user 0m0.131s 00:05:55.801 sys 0m0.011s 00:05:55.801 21:09:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.801 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:55.801 ************************************ 00:05:55.801 END TEST env_memory 00:05:55.801 ************************************ 00:05:56.061 21:09:30 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:56.061 21:09:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.061 21:09:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.061 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:05:56.061 ************************************ 
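The *ERROR* lines in the env_memory output above come from the suite deliberately feeding invalid vaddr/len pairs to the spdk_mem_map translation and registration paths and checking only that they are rejected, which is why all 4 tests still report passed. As a reference sketch (the path is the one this workspace used; whether sudo is needed depends on how hugepages were set up), the same CUnit suite can be re-run on its own:

  # re-run only the "memory" suite shown above from the built SPDK tree
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  sudo ./test/env/memory/memory_ut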
00:05:56.061 START TEST env_vtophys 00:05:56.061 ************************************ 00:05:56.061 21:09:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:56.061 EAL: lib.eal log level changed from notice to debug 00:05:56.061 EAL: Detected lcore 0 as core 0 on socket 0 00:05:56.061 EAL: Detected lcore 1 as core 1 on socket 0 00:05:56.061 EAL: Detected lcore 2 as core 2 on socket 0 00:05:56.061 EAL: Detected lcore 3 as core 3 on socket 0 00:05:56.061 EAL: Detected lcore 4 as core 4 on socket 0 00:05:56.061 EAL: Detected lcore 5 as core 5 on socket 0 00:05:56.061 EAL: Detected lcore 6 as core 6 on socket 0 00:05:56.061 EAL: Detected lcore 7 as core 8 on socket 0 00:05:56.061 EAL: Detected lcore 8 as core 9 on socket 0 00:05:56.061 EAL: Detected lcore 9 as core 10 on socket 0 00:05:56.061 EAL: Detected lcore 10 as core 11 on socket 0 00:05:56.061 EAL: Detected lcore 11 as core 12 on socket 0 00:05:56.061 EAL: Detected lcore 12 as core 13 on socket 0 00:05:56.061 EAL: Detected lcore 13 as core 14 on socket 0 00:05:56.061 EAL: Detected lcore 14 as core 16 on socket 0 00:05:56.061 EAL: Detected lcore 15 as core 17 on socket 0 00:05:56.061 EAL: Detected lcore 16 as core 18 on socket 0 00:05:56.061 EAL: Detected lcore 17 as core 19 on socket 0 00:05:56.061 EAL: Detected lcore 18 as core 20 on socket 0 00:05:56.061 EAL: Detected lcore 19 as core 21 on socket 0 00:05:56.061 EAL: Detected lcore 20 as core 22 on socket 0 00:05:56.061 EAL: Detected lcore 21 as core 24 on socket 0 00:05:56.061 EAL: Detected lcore 22 as core 25 on socket 0 00:05:56.061 EAL: Detected lcore 23 as core 26 on socket 0 00:05:56.061 EAL: Detected lcore 24 as core 27 on socket 0 00:05:56.061 EAL: Detected lcore 25 as core 28 on socket 0 00:05:56.061 EAL: Detected lcore 26 as core 29 on socket 0 00:05:56.061 EAL: Detected lcore 27 as core 30 on socket 0 00:05:56.061 EAL: Detected lcore 28 as core 0 on socket 1 00:05:56.061 EAL: Detected lcore 29 as core 1 on socket 1 00:05:56.061 EAL: Detected lcore 30 as core 2 on socket 1 00:05:56.061 EAL: Detected lcore 31 as core 3 on socket 1 00:05:56.061 EAL: Detected lcore 32 as core 4 on socket 1 00:05:56.061 EAL: Detected lcore 33 as core 5 on socket 1 00:05:56.061 EAL: Detected lcore 34 as core 6 on socket 1 00:05:56.061 EAL: Detected lcore 35 as core 8 on socket 1 00:05:56.061 EAL: Detected lcore 36 as core 9 on socket 1 00:05:56.061 EAL: Detected lcore 37 as core 10 on socket 1 00:05:56.061 EAL: Detected lcore 38 as core 11 on socket 1 00:05:56.061 EAL: Detected lcore 39 as core 12 on socket 1 00:05:56.061 EAL: Detected lcore 40 as core 13 on socket 1 00:05:56.061 EAL: Detected lcore 41 as core 14 on socket 1 00:05:56.061 EAL: Detected lcore 42 as core 16 on socket 1 00:05:56.061 EAL: Detected lcore 43 as core 17 on socket 1 00:05:56.061 EAL: Detected lcore 44 as core 18 on socket 1 00:05:56.061 EAL: Detected lcore 45 as core 19 on socket 1 00:05:56.061 EAL: Detected lcore 46 as core 20 on socket 1 00:05:56.061 EAL: Detected lcore 47 as core 21 on socket 1 00:05:56.061 EAL: Detected lcore 48 as core 22 on socket 1 00:05:56.061 EAL: Detected lcore 49 as core 24 on socket 1 00:05:56.061 EAL: Detected lcore 50 as core 25 on socket 1 00:05:56.061 EAL: Detected lcore 51 as core 26 on socket 1 00:05:56.061 EAL: Detected lcore 52 as core 27 on socket 1 00:05:56.061 EAL: Detected lcore 53 as core 28 on socket 1 00:05:56.061 EAL: Detected lcore 54 as core 29 on socket 1 00:05:56.061 EAL: Detected lcore 55 as core 30 on 
socket 1 00:05:56.061 EAL: Detected lcore 56 as core 0 on socket 0 00:05:56.061 EAL: Detected lcore 57 as core 1 on socket 0 00:05:56.061 EAL: Detected lcore 58 as core 2 on socket 0 00:05:56.061 EAL: Detected lcore 59 as core 3 on socket 0 00:05:56.061 EAL: Detected lcore 60 as core 4 on socket 0 00:05:56.061 EAL: Detected lcore 61 as core 5 on socket 0 00:05:56.061 EAL: Detected lcore 62 as core 6 on socket 0 00:05:56.061 EAL: Detected lcore 63 as core 8 on socket 0 00:05:56.061 EAL: Detected lcore 64 as core 9 on socket 0 00:05:56.061 EAL: Detected lcore 65 as core 10 on socket 0 00:05:56.061 EAL: Detected lcore 66 as core 11 on socket 0 00:05:56.061 EAL: Detected lcore 67 as core 12 on socket 0 00:05:56.061 EAL: Detected lcore 68 as core 13 on socket 0 00:05:56.061 EAL: Detected lcore 69 as core 14 on socket 0 00:05:56.061 EAL: Detected lcore 70 as core 16 on socket 0 00:05:56.061 EAL: Detected lcore 71 as core 17 on socket 0 00:05:56.061 EAL: Detected lcore 72 as core 18 on socket 0 00:05:56.061 EAL: Detected lcore 73 as core 19 on socket 0 00:05:56.061 EAL: Detected lcore 74 as core 20 on socket 0 00:05:56.061 EAL: Detected lcore 75 as core 21 on socket 0 00:05:56.061 EAL: Detected lcore 76 as core 22 on socket 0 00:05:56.061 EAL: Detected lcore 77 as core 24 on socket 0 00:05:56.061 EAL: Detected lcore 78 as core 25 on socket 0 00:05:56.061 EAL: Detected lcore 79 as core 26 on socket 0 00:05:56.061 EAL: Detected lcore 80 as core 27 on socket 0 00:05:56.061 EAL: Detected lcore 81 as core 28 on socket 0 00:05:56.061 EAL: Detected lcore 82 as core 29 on socket 0 00:05:56.061 EAL: Detected lcore 83 as core 30 on socket 0 00:05:56.061 EAL: Detected lcore 84 as core 0 on socket 1 00:05:56.061 EAL: Detected lcore 85 as core 1 on socket 1 00:05:56.061 EAL: Detected lcore 86 as core 2 on socket 1 00:05:56.061 EAL: Detected lcore 87 as core 3 on socket 1 00:05:56.061 EAL: Detected lcore 88 as core 4 on socket 1 00:05:56.061 EAL: Detected lcore 89 as core 5 on socket 1 00:05:56.061 EAL: Detected lcore 90 as core 6 on socket 1 00:05:56.061 EAL: Detected lcore 91 as core 8 on socket 1 00:05:56.061 EAL: Detected lcore 92 as core 9 on socket 1 00:05:56.061 EAL: Detected lcore 93 as core 10 on socket 1 00:05:56.061 EAL: Detected lcore 94 as core 11 on socket 1 00:05:56.061 EAL: Detected lcore 95 as core 12 on socket 1 00:05:56.061 EAL: Detected lcore 96 as core 13 on socket 1 00:05:56.061 EAL: Detected lcore 97 as core 14 on socket 1 00:05:56.061 EAL: Detected lcore 98 as core 16 on socket 1 00:05:56.061 EAL: Detected lcore 99 as core 17 on socket 1 00:05:56.061 EAL: Detected lcore 100 as core 18 on socket 1 00:05:56.061 EAL: Detected lcore 101 as core 19 on socket 1 00:05:56.061 EAL: Detected lcore 102 as core 20 on socket 1 00:05:56.061 EAL: Detected lcore 103 as core 21 on socket 1 00:05:56.061 EAL: Detected lcore 104 as core 22 on socket 1 00:05:56.061 EAL: Detected lcore 105 as core 24 on socket 1 00:05:56.061 EAL: Detected lcore 106 as core 25 on socket 1 00:05:56.062 EAL: Detected lcore 107 as core 26 on socket 1 00:05:56.062 EAL: Detected lcore 108 as core 27 on socket 1 00:05:56.062 EAL: Detected lcore 109 as core 28 on socket 1 00:05:56.062 EAL: Detected lcore 110 as core 29 on socket 1 00:05:56.062 EAL: Detected lcore 111 as core 30 on socket 1 00:05:56.062 EAL: Maximum logical cores by configuration: 128 00:05:56.062 EAL: Detected CPU lcores: 112 00:05:56.062 EAL: Detected NUMA nodes: 2 00:05:56.062 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:56.062 EAL: Detected shared 
linkage of DPDK 00:05:56.062 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:56.062 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:56.062 EAL: Registered [vdev] bus. 00:05:56.062 EAL: bus.vdev log level changed from disabled to notice 00:05:56.062 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:56.062 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:56.062 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:56.062 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:56.062 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:56.062 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:56.062 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:56.062 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:56.062 EAL: No shared files mode enabled, IPC will be disabled 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Bus pci wants IOVA as 'DC' 00:05:56.062 EAL: Bus vdev wants IOVA as 'DC' 00:05:56.062 EAL: Buses did not request a specific IOVA mode. 00:05:56.062 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:56.062 EAL: Selected IOVA mode 'VA' 00:05:56.062 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.062 EAL: Probing VFIO support... 00:05:56.062 EAL: IOMMU type 1 (Type 1) is supported 00:05:56.062 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:56.062 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:56.062 EAL: VFIO support initialized 00:05:56.062 EAL: Ask a virtual area of 0x2e000 bytes 00:05:56.062 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:56.062 EAL: Setting up physically contiguous memory... 
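The EAL state above (IOVA mode 'VA', VFIO type 1, 2 MB hugepage memseg lists on both sockets) is what SPDK's setup script normally prepares on the host before these tests run. A minimal sketch of that prep, assuming the stock scripts/setup.sh flow rather than anything this particular job scripted (the HUGEMEM value is illustrative):

  # reserve 2 MB hugepages and rebind test devices
  sudo HUGEMEM=4096 ./scripts/setup.sh
  grep -i hugepages /proc/meminfo         # confirm the pages EAL will map
  ls /sys/kernel/iommu_groups | wc -l     # non-zero when the IOMMU is on, so EAL can pick IOVA as VA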
00:05:56.062 EAL: Setting maximum number of open files to 524288 00:05:56.062 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:56.062 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:56.062 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:56.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.062 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:56.062 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.062 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:56.062 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:56.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.062 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:56.062 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.062 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:56.062 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:56.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.062 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:56.062 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.062 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:56.062 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:56.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.062 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:56.062 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.062 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:56.062 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:56.062 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:56.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.062 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:56.062 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.062 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:56.062 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:56.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.062 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:56.062 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.062 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:56.062 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:56.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.062 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:56.062 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.062 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:56.062 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:56.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.062 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:56.062 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.062 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:56.062 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:56.062 EAL: Hugepages will be freed exactly as allocated. 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: TSC frequency is ~2500000 KHz 00:05:56.062 EAL: Main lcore 0 is ready (tid=7f3a5a0c0a00;cpuset=[0]) 00:05:56.062 EAL: Trying to obtain current memory policy. 00:05:56.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.062 EAL: Restoring previous memory policy: 0 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was expanded by 2MB 00:05:56.062 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:05:56.062 EAL: probe driver: 8086:37d2 net_i40e 00:05:56.062 EAL: Not managed by a supported kernel driver, skipped 00:05:56.062 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:05:56.062 EAL: probe driver: 8086:37d2 net_i40e 00:05:56.062 EAL: Not managed by a supported kernel driver, skipped 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:56.062 EAL: Mem event callback 'spdk:(nil)' registered 00:05:56.062 00:05:56.062 00:05:56.062 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.062 http://cunit.sourceforge.net/ 00:05:56.062 00:05:56.062 00:05:56.062 Suite: components_suite 00:05:56.062 Test: vtophys_malloc_test ...passed 00:05:56.062 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:56.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.062 EAL: Restoring previous memory policy: 4 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was expanded by 4MB 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was shrunk by 4MB 00:05:56.062 EAL: Trying to obtain current memory policy. 00:05:56.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.062 EAL: Restoring previous memory policy: 4 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was expanded by 6MB 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was shrunk by 6MB 00:05:56.062 EAL: Trying to obtain current memory policy. 00:05:56.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.062 EAL: Restoring previous memory policy: 4 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was expanded by 10MB 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was shrunk by 10MB 00:05:56.062 EAL: Trying to obtain current memory policy. 
00:05:56.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.062 EAL: Restoring previous memory policy: 4 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was expanded by 18MB 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was shrunk by 18MB 00:05:56.062 EAL: Trying to obtain current memory policy. 00:05:56.062 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.062 EAL: Restoring previous memory policy: 4 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was expanded by 34MB 00:05:56.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.062 EAL: request: mp_malloc_sync 00:05:56.062 EAL: No shared files mode enabled, IPC is disabled 00:05:56.062 EAL: Heap on socket 0 was shrunk by 34MB 00:05:56.063 EAL: Trying to obtain current memory policy. 00:05:56.063 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.063 EAL: Restoring previous memory policy: 4 00:05:56.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.063 EAL: request: mp_malloc_sync 00:05:56.063 EAL: No shared files mode enabled, IPC is disabled 00:05:56.063 EAL: Heap on socket 0 was expanded by 66MB 00:05:56.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.063 EAL: request: mp_malloc_sync 00:05:56.063 EAL: No shared files mode enabled, IPC is disabled 00:05:56.063 EAL: Heap on socket 0 was shrunk by 66MB 00:05:56.063 EAL: Trying to obtain current memory policy. 00:05:56.063 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.063 EAL: Restoring previous memory policy: 4 00:05:56.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.063 EAL: request: mp_malloc_sync 00:05:56.063 EAL: No shared files mode enabled, IPC is disabled 00:05:56.063 EAL: Heap on socket 0 was expanded by 130MB 00:05:56.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.063 EAL: request: mp_malloc_sync 00:05:56.063 EAL: No shared files mode enabled, IPC is disabled 00:05:56.063 EAL: Heap on socket 0 was shrunk by 130MB 00:05:56.063 EAL: Trying to obtain current memory policy. 00:05:56.063 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.322 EAL: Restoring previous memory policy: 4 00:05:56.322 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.322 EAL: request: mp_malloc_sync 00:05:56.322 EAL: No shared files mode enabled, IPC is disabled 00:05:56.322 EAL: Heap on socket 0 was expanded by 258MB 00:05:56.322 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.322 EAL: request: mp_malloc_sync 00:05:56.322 EAL: No shared files mode enabled, IPC is disabled 00:05:56.322 EAL: Heap on socket 0 was shrunk by 258MB 00:05:56.322 EAL: Trying to obtain current memory policy. 
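Each expand/shrink pair above is vtophys_spdk_malloc_test allocating a progressively larger DMA-safe buffer and freeing it again, with the 'spdk:(nil)' mem event callback notified on every change. A rough way to observe the same effect from the shell (a sketch, not something this job ran):

  grep HugePages_Free /proc/meminfo       # before
  sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
  grep HugePages_Free /proc/meminfo       # after: should match the 'before' value, per "Hugepages will be freed exactly as allocated"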
00:05:56.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.322 EAL: Restoring previous memory policy: 4 00:05:56.322 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.322 EAL: request: mp_malloc_sync 00:05:56.322 EAL: No shared files mode enabled, IPC is disabled 00:05:56.322 EAL: Heap on socket 0 was expanded by 514MB 00:05:56.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.582 EAL: request: mp_malloc_sync 00:05:56.582 EAL: No shared files mode enabled, IPC is disabled 00:05:56.582 EAL: Heap on socket 0 was shrunk by 514MB 00:05:56.582 EAL: Trying to obtain current memory policy. 00:05:56.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.841 EAL: Restoring previous memory policy: 4 00:05:56.841 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.841 EAL: request: mp_malloc_sync 00:05:56.841 EAL: No shared files mode enabled, IPC is disabled 00:05:56.841 EAL: Heap on socket 0 was expanded by 1026MB 00:05:56.841 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.100 EAL: request: mp_malloc_sync 00:05:57.100 EAL: No shared files mode enabled, IPC is disabled 00:05:57.100 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:57.100 passed 00:05:57.100 00:05:57.100 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.100 suites 1 1 n/a 0 0 00:05:57.100 tests 2 2 2 0 0 00:05:57.100 asserts 497 497 497 0 n/a 00:05:57.100 00:05:57.100 Elapsed time = 0.959 seconds 00:05:57.100 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.100 EAL: request: mp_malloc_sync 00:05:57.100 EAL: No shared files mode enabled, IPC is disabled 00:05:57.100 EAL: Heap on socket 0 was shrunk by 2MB 00:05:57.100 EAL: No shared files mode enabled, IPC is disabled 00:05:57.100 EAL: No shared files mode enabled, IPC is disabled 00:05:57.100 EAL: No shared files mode enabled, IPC is disabled 00:05:57.100 00:05:57.100 real 0m1.101s 00:05:57.100 user 0m0.630s 00:05:57.100 sys 0m0.441s 00:05:57.100 21:09:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.100 21:09:31 -- common/autotest_common.sh@10 -- # set +x 00:05:57.100 ************************************ 00:05:57.100 END TEST env_vtophys 00:05:57.100 ************************************ 00:05:57.100 21:09:31 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:57.100 21:09:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.100 21:09:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.100 21:09:31 -- common/autotest_common.sh@10 -- # set +x 00:05:57.100 ************************************ 00:05:57.100 START TEST env_pci 00:05:57.100 ************************************ 00:05:57.100 21:09:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:57.100 00:05:57.100 00:05:57.100 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.100 http://cunit.sourceforge.net/ 00:05:57.100 00:05:57.100 00:05:57.100 Suite: pci 00:05:57.100 Test: pci_hook ...[2024-07-26 21:09:31.851810] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1497994 has claimed it 00:05:57.100 EAL: Cannot find device (10000:00:01.0) 00:05:57.100 EAL: Failed to attach device on primary process 00:05:57.100 passed 00:05:57.100 00:05:57.100 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.100 suites 1 1 n/a 0 0 00:05:57.100 tests 1 1 1 0 0 00:05:57.100 asserts 
25 25 25 0 n/a 00:05:57.100 00:05:57.100 Elapsed time = 0.039 seconds 00:05:57.100 00:05:57.100 real 0m0.058s 00:05:57.100 user 0m0.016s 00:05:57.100 sys 0m0.042s 00:05:57.100 21:09:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.100 21:09:31 -- common/autotest_common.sh@10 -- # set +x 00:05:57.100 ************************************ 00:05:57.100 END TEST env_pci 00:05:57.100 ************************************ 00:05:57.100 21:09:31 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:57.100 21:09:31 -- env/env.sh@15 -- # uname 00:05:57.100 21:09:31 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:57.100 21:09:31 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:57.100 21:09:31 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:57.100 21:09:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:57.100 21:09:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.100 21:09:31 -- common/autotest_common.sh@10 -- # set +x 00:05:57.100 ************************************ 00:05:57.100 START TEST env_dpdk_post_init 00:05:57.100 ************************************ 00:05:57.100 21:09:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:57.359 EAL: Detected CPU lcores: 112 00:05:57.359 EAL: Detected NUMA nodes: 2 00:05:57.359 EAL: Detected shared linkage of DPDK 00:05:57.359 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:57.359 EAL: Selected IOVA mode 'VA' 00:05:57.359 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.359 EAL: VFIO support initialized 00:05:57.359 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:57.359 EAL: Using IOMMU type 1 (Type 1) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:57.359 EAL: Ignore mapping IO port bar(1) 00:05:57.359 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:57.618 EAL: Ignore mapping IO port bar(1) 00:05:57.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:05:57.618 EAL: Ignore mapping IO port bar(1) 00:05:57.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:57.618 EAL: Ignore mapping IO port bar(1) 00:05:57.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:57.618 EAL: Ignore mapping IO port bar(1) 00:05:57.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:57.618 EAL: Ignore mapping IO port bar(1) 00:05:57.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:58.186 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:06:02.377 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:06:02.377 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:06:02.378 Starting DPDK initialization... 00:06:02.378 Starting SPDK post initialization... 00:06:02.378 SPDK NVMe probe 00:06:02.378 Attaching to 0000:d8:00.0 00:06:02.378 Attached to 0000:d8:00.0 00:06:02.378 Cleaning up... 00:06:02.378 00:06:02.378 real 0m5.242s 00:06:02.378 user 0m3.897s 00:06:02.378 sys 0m0.406s 00:06:02.378 21:09:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.378 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:02.378 ************************************ 00:06:02.378 END TEST env_dpdk_post_init 00:06:02.378 ************************************ 00:06:02.378 21:09:37 -- env/env.sh@26 -- # uname 00:06:02.378 21:09:37 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:02.378 21:09:37 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:02.378 21:09:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.378 21:09:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.378 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:02.378 ************************************ 00:06:02.378 START TEST env_mem_callbacks 00:06:02.378 ************************************ 00:06:02.378 21:09:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:02.638 EAL: Detected CPU lcores: 112 00:06:02.638 EAL: Detected NUMA nodes: 2 00:06:02.638 EAL: Detected shared linkage of DPDK 00:06:02.638 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:02.638 EAL: Selected IOVA mode 'VA' 00:06:02.638 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.638 EAL: VFIO support initialized 00:06:02.638 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:02.638 00:06:02.638 00:06:02.638 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.638 http://cunit.sourceforge.net/ 00:06:02.638 00:06:02.638 00:06:02.638 Suite: memory 00:06:02.638 Test: test ... 
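The env_dpdk_post_init run above walks the ioat channels at 0000:00:04.x / 0000:80:04.x and attaches to the NVMe controller at 0000:d8:00.0. A hedged way to inspect how those BDFs are bound on this kind of rig (output obviously depends on the host):

  ./scripts/setup.sh status | grep -E '00:04\.|80:04\.|d8:00\.0'   # driver bound to each test device
  lspci -s d8:00.0 -k                                              # kernel view of the NVMe controller the test attached to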
00:06:02.638 register 0x200000200000 2097152 00:06:02.638 malloc 3145728 00:06:02.638 register 0x200000400000 4194304 00:06:02.638 buf 0x200000500000 len 3145728 PASSED 00:06:02.638 malloc 64 00:06:02.638 buf 0x2000004fff40 len 64 PASSED 00:06:02.638 malloc 4194304 00:06:02.638 register 0x200000800000 6291456 00:06:02.638 buf 0x200000a00000 len 4194304 PASSED 00:06:02.638 free 0x200000500000 3145728 00:06:02.638 free 0x2000004fff40 64 00:06:02.638 unregister 0x200000400000 4194304 PASSED 00:06:02.638 free 0x200000a00000 4194304 00:06:02.638 unregister 0x200000800000 6291456 PASSED 00:06:02.638 malloc 8388608 00:06:02.638 register 0x200000400000 10485760 00:06:02.638 buf 0x200000600000 len 8388608 PASSED 00:06:02.638 free 0x200000600000 8388608 00:06:02.638 unregister 0x200000400000 10485760 PASSED 00:06:02.638 passed 00:06:02.638 00:06:02.638 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.638 suites 1 1 n/a 0 0 00:06:02.638 tests 1 1 1 0 0 00:06:02.638 asserts 15 15 15 0 n/a 00:06:02.638 00:06:02.638 Elapsed time = 0.005 seconds 00:06:02.638 00:06:02.638 real 0m0.056s 00:06:02.638 user 0m0.016s 00:06:02.638 sys 0m0.040s 00:06:02.638 21:09:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.638 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:02.638 ************************************ 00:06:02.638 END TEST env_mem_callbacks 00:06:02.638 ************************************ 00:06:02.638 00:06:02.638 real 0m6.926s 00:06:02.638 user 0m4.792s 00:06:02.638 sys 0m1.212s 00:06:02.638 21:09:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.638 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:02.638 ************************************ 00:06:02.638 END TEST env 00:06:02.638 ************************************ 00:06:02.638 21:09:37 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:02.638 21:09:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.638 21:09:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.638 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:02.638 ************************************ 00:06:02.638 START TEST rpc 00:06:02.638 ************************************ 00:06:02.638 21:09:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:02.638 * Looking for test storage... 00:06:02.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:02.638 21:09:37 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:02.638 21:09:37 -- rpc/rpc.sh@65 -- # spdk_pid=1498970 00:06:02.638 21:09:37 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.638 21:09:37 -- rpc/rpc.sh@67 -- # waitforlisten 1498970 00:06:02.638 21:09:37 -- common/autotest_common.sh@819 -- # '[' -z 1498970 ']' 00:06:02.638 21:09:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.638 21:09:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.638 21:09:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
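The rpc suite starting here launches spdk_tgt with the bdev tracepoint group enabled and then waits for its RPC socket to answer. Roughly the equivalent by hand, using the same binaries this workspace built (the timeout value is an arbitrary choice):

  ./build/bin/spdk_tgt -e bdev &                                             # same target and '-e bdev' mask as rpc.sh
  ./scripts/rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods >/dev/null    # returns once the socket is listening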
00:06:02.638 21:09:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.638 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:02.898 [2024-07-26 21:09:37.537167] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:02.898 [2024-07-26 21:09:37.537230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498970 ] 00:06:02.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.898 [2024-07-26 21:09:37.622818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.898 [2024-07-26 21:09:37.661169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.898 [2024-07-26 21:09:37.661276] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:02.898 [2024-07-26 21:09:37.661286] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1498970' to capture a snapshot of events at runtime. 00:06:02.898 [2024-07-26 21:09:37.661295] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1498970 for offline analysis/debug. 00:06:02.898 [2024-07-26 21:09:37.661318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.468 21:09:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.468 21:09:38 -- common/autotest_common.sh@852 -- # return 0 00:06:03.468 21:09:38 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:03.468 21:09:38 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:03.468 21:09:38 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:03.468 21:09:38 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:03.468 21:09:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.468 21:09:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.468 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.468 ************************************ 00:06:03.468 START TEST rpc_integrity 00:06:03.468 ************************************ 00:06:03.468 21:09:38 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:03.468 21:09:38 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:03.468 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.468 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.728 21:09:38 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:03.728 21:09:38 -- rpc/rpc.sh@13 -- # jq length 00:06:03.728 21:09:38 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:03.728 21:09:38 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:03.728 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.728 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.728 21:09:38 -- 
rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:03.728 21:09:38 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:03.728 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.728 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.728 21:09:38 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:03.728 { 00:06:03.728 "name": "Malloc0", 00:06:03.728 "aliases": [ 00:06:03.728 "726683e4-a0aa-4e42-98d4-56debbef184f" 00:06:03.728 ], 00:06:03.728 "product_name": "Malloc disk", 00:06:03.728 "block_size": 512, 00:06:03.728 "num_blocks": 16384, 00:06:03.728 "uuid": "726683e4-a0aa-4e42-98d4-56debbef184f", 00:06:03.728 "assigned_rate_limits": { 00:06:03.728 "rw_ios_per_sec": 0, 00:06:03.728 "rw_mbytes_per_sec": 0, 00:06:03.728 "r_mbytes_per_sec": 0, 00:06:03.728 "w_mbytes_per_sec": 0 00:06:03.728 }, 00:06:03.728 "claimed": false, 00:06:03.728 "zoned": false, 00:06:03.728 "supported_io_types": { 00:06:03.728 "read": true, 00:06:03.728 "write": true, 00:06:03.728 "unmap": true, 00:06:03.728 "write_zeroes": true, 00:06:03.728 "flush": true, 00:06:03.728 "reset": true, 00:06:03.728 "compare": false, 00:06:03.728 "compare_and_write": false, 00:06:03.728 "abort": true, 00:06:03.728 "nvme_admin": false, 00:06:03.728 "nvme_io": false 00:06:03.728 }, 00:06:03.728 "memory_domains": [ 00:06:03.728 { 00:06:03.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.728 "dma_device_type": 2 00:06:03.728 } 00:06:03.728 ], 00:06:03.728 "driver_specific": {} 00:06:03.728 } 00:06:03.728 ]' 00:06:03.728 21:09:38 -- rpc/rpc.sh@17 -- # jq length 00:06:03.728 21:09:38 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:03.728 21:09:38 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:03.728 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.728 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 [2024-07-26 21:09:38.468079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:03.728 [2024-07-26 21:09:38.468115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:03.728 [2024-07-26 21:09:38.468129] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21b9b00 00:06:03.728 [2024-07-26 21:09:38.468138] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:03.728 [2024-07-26 21:09:38.469205] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:03.728 [2024-07-26 21:09:38.469229] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:03.728 Passthru0 00:06:03.728 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.728 21:09:38 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:03.728 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.728 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.728 21:09:38 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:03.728 { 00:06:03.728 "name": "Malloc0", 00:06:03.728 "aliases": [ 00:06:03.728 "726683e4-a0aa-4e42-98d4-56debbef184f" 00:06:03.728 ], 00:06:03.728 "product_name": "Malloc disk", 00:06:03.728 "block_size": 512, 00:06:03.728 "num_blocks": 16384, 00:06:03.728 "uuid": "726683e4-a0aa-4e42-98d4-56debbef184f", 00:06:03.728 "assigned_rate_limits": { 00:06:03.728 "rw_ios_per_sec": 0, 00:06:03.728 "rw_mbytes_per_sec": 0, 00:06:03.728 "r_mbytes_per_sec": 0, 00:06:03.728 
"w_mbytes_per_sec": 0 00:06:03.728 }, 00:06:03.728 "claimed": true, 00:06:03.728 "claim_type": "exclusive_write", 00:06:03.728 "zoned": false, 00:06:03.728 "supported_io_types": { 00:06:03.728 "read": true, 00:06:03.728 "write": true, 00:06:03.728 "unmap": true, 00:06:03.728 "write_zeroes": true, 00:06:03.728 "flush": true, 00:06:03.728 "reset": true, 00:06:03.728 "compare": false, 00:06:03.728 "compare_and_write": false, 00:06:03.728 "abort": true, 00:06:03.728 "nvme_admin": false, 00:06:03.728 "nvme_io": false 00:06:03.728 }, 00:06:03.728 "memory_domains": [ 00:06:03.728 { 00:06:03.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.728 "dma_device_type": 2 00:06:03.728 } 00:06:03.728 ], 00:06:03.728 "driver_specific": {} 00:06:03.728 }, 00:06:03.728 { 00:06:03.728 "name": "Passthru0", 00:06:03.728 "aliases": [ 00:06:03.728 "c87514e1-b144-5db4-a8e1-c94894aace39" 00:06:03.728 ], 00:06:03.728 "product_name": "passthru", 00:06:03.728 "block_size": 512, 00:06:03.728 "num_blocks": 16384, 00:06:03.728 "uuid": "c87514e1-b144-5db4-a8e1-c94894aace39", 00:06:03.728 "assigned_rate_limits": { 00:06:03.728 "rw_ios_per_sec": 0, 00:06:03.728 "rw_mbytes_per_sec": 0, 00:06:03.728 "r_mbytes_per_sec": 0, 00:06:03.728 "w_mbytes_per_sec": 0 00:06:03.728 }, 00:06:03.728 "claimed": false, 00:06:03.728 "zoned": false, 00:06:03.728 "supported_io_types": { 00:06:03.728 "read": true, 00:06:03.728 "write": true, 00:06:03.728 "unmap": true, 00:06:03.728 "write_zeroes": true, 00:06:03.728 "flush": true, 00:06:03.728 "reset": true, 00:06:03.728 "compare": false, 00:06:03.728 "compare_and_write": false, 00:06:03.728 "abort": true, 00:06:03.728 "nvme_admin": false, 00:06:03.728 "nvme_io": false 00:06:03.728 }, 00:06:03.728 "memory_domains": [ 00:06:03.728 { 00:06:03.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.728 "dma_device_type": 2 00:06:03.728 } 00:06:03.728 ], 00:06:03.728 "driver_specific": { 00:06:03.728 "passthru": { 00:06:03.728 "name": "Passthru0", 00:06:03.728 "base_bdev_name": "Malloc0" 00:06:03.728 } 00:06:03.728 } 00:06:03.728 } 00:06:03.728 ]' 00:06:03.728 21:09:38 -- rpc/rpc.sh@21 -- # jq length 00:06:03.728 21:09:38 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:03.728 21:09:38 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:03.728 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.728 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.728 21:09:38 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:03.728 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.728 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.728 21:09:38 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:03.728 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.728 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.728 21:09:38 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:03.728 21:09:38 -- rpc/rpc.sh@26 -- # jq length 00:06:03.988 21:09:38 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:03.988 00:06:03.988 real 0m0.279s 00:06:03.988 user 0m0.169s 00:06:03.988 sys 0m0.046s 00:06:03.988 21:09:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.988 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.988 ************************************ 00:06:03.988 END TEST rpc_integrity 
00:06:03.988 ************************************ 00:06:03.988 21:09:38 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:03.988 21:09:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.988 21:09:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.988 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.988 ************************************ 00:06:03.988 START TEST rpc_plugins 00:06:03.988 ************************************ 00:06:03.989 21:09:38 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:06:03.989 21:09:38 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:03.989 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.989 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.989 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.989 21:09:38 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:03.989 21:09:38 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:03.989 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.989 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.989 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.989 21:09:38 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:03.989 { 00:06:03.989 "name": "Malloc1", 00:06:03.989 "aliases": [ 00:06:03.989 "51da3bda-fee1-43db-8807-8c7cdb7aaee2" 00:06:03.989 ], 00:06:03.989 "product_name": "Malloc disk", 00:06:03.989 "block_size": 4096, 00:06:03.989 "num_blocks": 256, 00:06:03.989 "uuid": "51da3bda-fee1-43db-8807-8c7cdb7aaee2", 00:06:03.989 "assigned_rate_limits": { 00:06:03.989 "rw_ios_per_sec": 0, 00:06:03.989 "rw_mbytes_per_sec": 0, 00:06:03.989 "r_mbytes_per_sec": 0, 00:06:03.989 "w_mbytes_per_sec": 0 00:06:03.989 }, 00:06:03.989 "claimed": false, 00:06:03.989 "zoned": false, 00:06:03.989 "supported_io_types": { 00:06:03.989 "read": true, 00:06:03.989 "write": true, 00:06:03.989 "unmap": true, 00:06:03.989 "write_zeroes": true, 00:06:03.989 "flush": true, 00:06:03.989 "reset": true, 00:06:03.989 "compare": false, 00:06:03.989 "compare_and_write": false, 00:06:03.989 "abort": true, 00:06:03.989 "nvme_admin": false, 00:06:03.989 "nvme_io": false 00:06:03.989 }, 00:06:03.989 "memory_domains": [ 00:06:03.989 { 00:06:03.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.989 "dma_device_type": 2 00:06:03.989 } 00:06:03.989 ], 00:06:03.989 "driver_specific": {} 00:06:03.989 } 00:06:03.989 ]' 00:06:03.989 21:09:38 -- rpc/rpc.sh@32 -- # jq length 00:06:03.989 21:09:38 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:03.989 21:09:38 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:03.989 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.989 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.989 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.989 21:09:38 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:03.989 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.989 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.989 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.989 21:09:38 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:03.989 21:09:38 -- rpc/rpc.sh@36 -- # jq length 00:06:03.989 21:09:38 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:03.989 00:06:03.989 real 0m0.144s 00:06:03.989 user 0m0.092s 00:06:03.989 sys 0m0.020s 00:06:03.989 21:09:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.989 21:09:38 -- common/autotest_common.sh@10 -- # set +x 
00:06:03.989 ************************************ 00:06:03.989 END TEST rpc_plugins 00:06:03.989 ************************************ 00:06:03.989 21:09:38 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:03.989 21:09:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.989 21:09:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.989 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:03.989 ************************************ 00:06:03.989 START TEST rpc_trace_cmd_test 00:06:03.989 ************************************ 00:06:03.989 21:09:38 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:06:03.989 21:09:38 -- rpc/rpc.sh@40 -- # local info 00:06:03.989 21:09:38 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:03.989 21:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.989 21:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.249 21:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.249 21:09:38 -- rpc/rpc.sh@42 -- # info='{ 00:06:04.249 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1498970", 00:06:04.249 "tpoint_group_mask": "0x8", 00:06:04.249 "iscsi_conn": { 00:06:04.249 "mask": "0x2", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "scsi": { 00:06:04.249 "mask": "0x4", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "bdev": { 00:06:04.249 "mask": "0x8", 00:06:04.249 "tpoint_mask": "0xffffffffffffffff" 00:06:04.249 }, 00:06:04.249 "nvmf_rdma": { 00:06:04.249 "mask": "0x10", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "nvmf_tcp": { 00:06:04.249 "mask": "0x20", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "ftl": { 00:06:04.249 "mask": "0x40", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "blobfs": { 00:06:04.249 "mask": "0x80", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "dsa": { 00:06:04.249 "mask": "0x200", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "thread": { 00:06:04.249 "mask": "0x400", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "nvme_pcie": { 00:06:04.249 "mask": "0x800", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "iaa": { 00:06:04.249 "mask": "0x1000", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "nvme_tcp": { 00:06:04.249 "mask": "0x2000", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 }, 00:06:04.249 "bdev_nvme": { 00:06:04.249 "mask": "0x4000", 00:06:04.249 "tpoint_mask": "0x0" 00:06:04.249 } 00:06:04.249 }' 00:06:04.249 21:09:38 -- rpc/rpc.sh@43 -- # jq length 00:06:04.249 21:09:38 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:04.249 21:09:38 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:04.249 21:09:38 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:04.249 21:09:38 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:04.249 21:09:39 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:04.249 21:09:39 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:04.249 21:09:39 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:04.249 21:09:39 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:04.249 21:09:39 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:04.249 00:06:04.249 real 0m0.221s 00:06:04.249 user 0m0.178s 00:06:04.249 sys 0m0.033s 00:06:04.249 21:09:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.249 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.249 ************************************ 00:06:04.249 END TEST rpc_trace_cmd_test 
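The rpc_trace_cmd_test above confirms that only the bdev tpoint group requested with '-e bdev' is enabled (group mask 0x8) and that the shm path names this target's pid. The same checks by hand (a sketch; the jq filters mirror what rpc.sh pipes through):

  ./scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask'   # "0x8"
  ./scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'    # "0xffffffffffffffff"
  spdk_trace -s spdk_tgt -p 1498970                              # decode /dev/shm/spdk_tgt_trace.pid1498970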
00:06:04.249 ************************************ 00:06:04.249 21:09:39 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:04.249 21:09:39 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:04.249 21:09:39 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:04.249 21:09:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.249 21:09:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.249 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.249 ************************************ 00:06:04.249 START TEST rpc_daemon_integrity 00:06:04.249 ************************************ 00:06:04.510 21:09:39 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:04.510 21:09:39 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:04.510 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.510 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.510 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.510 21:09:39 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:04.510 21:09:39 -- rpc/rpc.sh@13 -- # jq length 00:06:04.510 21:09:39 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:04.510 21:09:39 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:04.510 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.510 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.510 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.510 21:09:39 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:04.510 21:09:39 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:04.510 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.510 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.510 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.510 21:09:39 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:04.510 { 00:06:04.510 "name": "Malloc2", 00:06:04.510 "aliases": [ 00:06:04.510 "097740d1-828d-4748-b955-6b8b93974834" 00:06:04.510 ], 00:06:04.510 "product_name": "Malloc disk", 00:06:04.510 "block_size": 512, 00:06:04.510 "num_blocks": 16384, 00:06:04.510 "uuid": "097740d1-828d-4748-b955-6b8b93974834", 00:06:04.510 "assigned_rate_limits": { 00:06:04.510 "rw_ios_per_sec": 0, 00:06:04.510 "rw_mbytes_per_sec": 0, 00:06:04.510 "r_mbytes_per_sec": 0, 00:06:04.510 "w_mbytes_per_sec": 0 00:06:04.510 }, 00:06:04.510 "claimed": false, 00:06:04.510 "zoned": false, 00:06:04.510 "supported_io_types": { 00:06:04.510 "read": true, 00:06:04.510 "write": true, 00:06:04.510 "unmap": true, 00:06:04.510 "write_zeroes": true, 00:06:04.510 "flush": true, 00:06:04.510 "reset": true, 00:06:04.510 "compare": false, 00:06:04.510 "compare_and_write": false, 00:06:04.510 "abort": true, 00:06:04.510 "nvme_admin": false, 00:06:04.510 "nvme_io": false 00:06:04.510 }, 00:06:04.510 "memory_domains": [ 00:06:04.510 { 00:06:04.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.510 "dma_device_type": 2 00:06:04.510 } 00:06:04.510 ], 00:06:04.510 "driver_specific": {} 00:06:04.510 } 00:06:04.510 ]' 00:06:04.510 21:09:39 -- rpc/rpc.sh@17 -- # jq length 00:06:04.510 21:09:39 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:04.510 21:09:39 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:04.510 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.510 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.510 [2024-07-26 21:09:39.258233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:04.510 [2024-07-26 21:09:39.258262] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.510 [2024-07-26 21:09:39.258275] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21b1690 00:06:04.510 [2024-07-26 21:09:39.258284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.510 [2024-07-26 21:09:39.259179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.510 [2024-07-26 21:09:39.259200] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:04.510 Passthru0 00:06:04.510 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.510 21:09:39 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:04.510 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.510 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.510 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.510 21:09:39 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.510 { 00:06:04.510 "name": "Malloc2", 00:06:04.510 "aliases": [ 00:06:04.510 "097740d1-828d-4748-b955-6b8b93974834" 00:06:04.510 ], 00:06:04.510 "product_name": "Malloc disk", 00:06:04.510 "block_size": 512, 00:06:04.510 "num_blocks": 16384, 00:06:04.510 "uuid": "097740d1-828d-4748-b955-6b8b93974834", 00:06:04.510 "assigned_rate_limits": { 00:06:04.510 "rw_ios_per_sec": 0, 00:06:04.510 "rw_mbytes_per_sec": 0, 00:06:04.510 "r_mbytes_per_sec": 0, 00:06:04.510 "w_mbytes_per_sec": 0 00:06:04.510 }, 00:06:04.510 "claimed": true, 00:06:04.510 "claim_type": "exclusive_write", 00:06:04.510 "zoned": false, 00:06:04.511 "supported_io_types": { 00:06:04.511 "read": true, 00:06:04.511 "write": true, 00:06:04.511 "unmap": true, 00:06:04.511 "write_zeroes": true, 00:06:04.511 "flush": true, 00:06:04.511 "reset": true, 00:06:04.511 "compare": false, 00:06:04.511 "compare_and_write": false, 00:06:04.511 "abort": true, 00:06:04.511 "nvme_admin": false, 00:06:04.511 "nvme_io": false 00:06:04.511 }, 00:06:04.511 "memory_domains": [ 00:06:04.511 { 00:06:04.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.511 "dma_device_type": 2 00:06:04.511 } 00:06:04.511 ], 00:06:04.511 "driver_specific": {} 00:06:04.511 }, 00:06:04.511 { 00:06:04.511 "name": "Passthru0", 00:06:04.511 "aliases": [ 00:06:04.511 "f484897f-bb6b-5a87-b3b9-ebb9d429de5f" 00:06:04.511 ], 00:06:04.511 "product_name": "passthru", 00:06:04.511 "block_size": 512, 00:06:04.511 "num_blocks": 16384, 00:06:04.511 "uuid": "f484897f-bb6b-5a87-b3b9-ebb9d429de5f", 00:06:04.511 "assigned_rate_limits": { 00:06:04.511 "rw_ios_per_sec": 0, 00:06:04.511 "rw_mbytes_per_sec": 0, 00:06:04.511 "r_mbytes_per_sec": 0, 00:06:04.511 "w_mbytes_per_sec": 0 00:06:04.511 }, 00:06:04.511 "claimed": false, 00:06:04.511 "zoned": false, 00:06:04.511 "supported_io_types": { 00:06:04.511 "read": true, 00:06:04.511 "write": true, 00:06:04.511 "unmap": true, 00:06:04.511 "write_zeroes": true, 00:06:04.511 "flush": true, 00:06:04.511 "reset": true, 00:06:04.511 "compare": false, 00:06:04.511 "compare_and_write": false, 00:06:04.511 "abort": true, 00:06:04.511 "nvme_admin": false, 00:06:04.511 "nvme_io": false 00:06:04.511 }, 00:06:04.511 "memory_domains": [ 00:06:04.511 { 00:06:04.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.511 "dma_device_type": 2 00:06:04.511 } 00:06:04.511 ], 00:06:04.511 "driver_specific": { 00:06:04.511 "passthru": { 00:06:04.511 "name": "Passthru0", 00:06:04.511 "base_bdev_name": "Malloc2" 00:06:04.511 } 00:06:04.511 } 00:06:04.511 } 00:06:04.511 ]' 00:06:04.511 21:09:39 -- 
rpc/rpc.sh@21 -- # jq length 00:06:04.511 21:09:39 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.511 21:09:39 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.511 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.511 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.511 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.511 21:09:39 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:04.511 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.511 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.511 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.511 21:09:39 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.511 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.511 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.511 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.511 21:09:39 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.511 21:09:39 -- rpc/rpc.sh@26 -- # jq length 00:06:04.771 21:09:39 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.771 00:06:04.771 real 0m0.281s 00:06:04.771 user 0m0.178s 00:06:04.771 sys 0m0.040s 00:06:04.771 21:09:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.771 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:04.771 ************************************ 00:06:04.771 END TEST rpc_daemon_integrity 00:06:04.771 ************************************ 00:06:04.771 21:09:39 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:04.771 21:09:39 -- rpc/rpc.sh@84 -- # killprocess 1498970 00:06:04.771 21:09:39 -- common/autotest_common.sh@926 -- # '[' -z 1498970 ']' 00:06:04.771 21:09:39 -- common/autotest_common.sh@930 -- # kill -0 1498970 00:06:04.771 21:09:39 -- common/autotest_common.sh@931 -- # uname 00:06:04.771 21:09:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:04.771 21:09:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1498970 00:06:04.771 21:09:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:04.771 21:09:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:04.771 21:09:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1498970' 00:06:04.771 killing process with pid 1498970 00:06:04.771 21:09:39 -- common/autotest_common.sh@945 -- # kill 1498970 00:06:04.771 21:09:39 -- common/autotest_common.sh@950 -- # wait 1498970 00:06:05.031 00:06:05.031 real 0m2.382s 00:06:05.031 user 0m3.002s 00:06:05.031 sys 0m0.707s 00:06:05.031 21:09:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.031 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.031 ************************************ 00:06:05.031 END TEST rpc 00:06:05.031 ************************************ 00:06:05.031 21:09:39 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:05.031 21:09:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.031 21:09:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.031 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.031 ************************************ 00:06:05.031 START TEST rpc_client 00:06:05.031 ************************************ 00:06:05.031 21:09:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:05.291 * Looking for test storage... 
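For reference, the rpc_daemon_integrity pass that just finished boils down to the RPC sequence below (commands and arguments as in the trace; only the $rpc shorthand is added): create an 8 MiB malloc bdev with 512-byte blocks, layer a passthru bdev on top of it, confirm both show up in bdev_get_bdevs, then tear them down in reverse order.

    # Condensed replay of the rpc_daemon_integrity sequence above (default RPC socket assumed).
    rpc="scripts/rpc.py"

    malloc=$($rpc bdev_malloc_create 8 512)               # -> Malloc2 (8 MiB, 512 B blocks)
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0   # claims Malloc2 under a passthru bdev
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]        # Malloc2 and Passthru0 both visible

    $rpc bdev_passthru_delete Passthru0                   # release the claim first
    $rpc bdev_malloc_delete "$malloc"
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]        # back to an empty bdev list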
00:06:05.291 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:05.291 21:09:39 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:05.291 OK 00:06:05.291 21:09:39 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:05.291 00:06:05.291 real 0m0.126s 00:06:05.291 user 0m0.048s 00:06:05.291 sys 0m0.087s 00:06:05.291 21:09:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.291 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.291 ************************************ 00:06:05.291 END TEST rpc_client 00:06:05.291 ************************************ 00:06:05.291 21:09:39 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:05.291 21:09:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.291 21:09:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.291 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.291 ************************************ 00:06:05.291 START TEST json_config 00:06:05.291 ************************************ 00:06:05.291 21:09:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:05.291 21:09:40 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.291 21:09:40 -- nvmf/common.sh@7 -- # uname -s 00:06:05.291 21:09:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.291 21:09:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.291 21:09:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.291 21:09:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.291 21:09:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.291 21:09:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.291 21:09:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.291 21:09:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.291 21:09:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.291 21:09:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.291 21:09:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:05.291 21:09:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:05.291 21:09:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.291 21:09:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.291 21:09:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.291 21:09:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:05.291 21:09:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.291 21:09:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.291 21:09:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.291 21:09:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.291 
21:09:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.291 21:09:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.291 21:09:40 -- paths/export.sh@5 -- # export PATH 00:06:05.291 21:09:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.291 21:09:40 -- nvmf/common.sh@46 -- # : 0 00:06:05.291 21:09:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:05.291 21:09:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:05.292 21:09:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:05.292 21:09:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.292 21:09:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.292 21:09:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:05.292 21:09:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:05.292 21:09:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:05.292 21:09:40 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:05.292 21:09:40 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:05.292 21:09:40 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:05.292 21:09:40 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:05.292 21:09:40 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:05.292 21:09:40 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:05.292 21:09:40 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:05.292 21:09:40 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:05.292 21:09:40 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:05.292 21:09:40 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:05.292 21:09:40 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:05.292 21:09:40 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:05.292 21:09:40 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:05.292 21:09:40 -- json_config/json_config.sh@418 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:06:05.292 21:09:40 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:05.292 INFO: JSON configuration test init 00:06:05.292 21:09:40 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:05.292 21:09:40 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:05.292 21:09:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:05.292 21:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:05.292 21:09:40 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:05.292 21:09:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:05.292 21:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:05.292 21:09:40 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:05.292 21:09:40 -- json_config/json_config.sh@98 -- # local app=target 00:06:05.292 21:09:40 -- json_config/json_config.sh@99 -- # shift 00:06:05.292 21:09:40 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:05.292 21:09:40 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:05.292 21:09:40 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:05.292 21:09:40 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:05.292 21:09:40 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:05.292 21:09:40 -- json_config/json_config.sh@111 -- # app_pid[$app]=1499671 00:06:05.292 21:09:40 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:05.292 Waiting for target to run... 00:06:05.292 21:09:40 -- json_config/json_config.sh@114 -- # waitforlisten 1499671 /var/tmp/spdk_tgt.sock 00:06:05.292 21:09:40 -- common/autotest_common.sh@819 -- # '[' -z 1499671 ']' 00:06:05.292 21:09:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.292 21:09:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.292 21:09:40 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:05.292 21:09:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.292 21:09:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.292 21:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:05.292 [2024-07-26 21:09:40.143893] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:05.292 [2024-07-26 21:09:40.143948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499671 ] 00:06:05.552 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.812 [2024-07-26 21:09:40.451253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.812 [2024-07-26 21:09:40.471144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.812 [2024-07-26 21:09:40.471241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.072 21:09:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:06.072 21:09:40 -- common/autotest_common.sh@852 -- # return 0 00:06:06.072 21:09:40 -- json_config/json_config.sh@115 -- # echo '' 00:06:06.072 00:06:06.072 21:09:40 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:06.072 21:09:40 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:06.072 21:09:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:06.072 21:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.072 21:09:40 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:06.072 21:09:40 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:06.072 21:09:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:06.072 21:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.334 21:09:40 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:06.334 21:09:40 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:06.334 21:09:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:09.693 21:09:44 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:09.693 21:09:44 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:09.693 21:09:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:09.693 21:09:44 -- common/autotest_common.sh@10 -- # set +x 00:06:09.693 21:09:44 -- json_config/json_config.sh@48 -- # local ret=0 00:06:09.693 21:09:44 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:09.693 21:09:44 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:09.693 21:09:44 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:09.693 21:09:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:09.693 21:09:44 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:09.693 21:09:44 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:09.693 21:09:44 -- json_config/json_config.sh@51 -- # local get_types 00:06:09.693 21:09:44 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:09.693 21:09:44 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:09.693 21:09:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:09.693 21:09:44 -- common/autotest_common.sh@10 -- # set +x 00:06:09.693 21:09:44 -- json_config/json_config.sh@58 -- # return 0 00:06:09.693 21:09:44 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:09.693 21:09:44 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:09.693 21:09:44 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:09.693 21:09:44 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:09.693 21:09:44 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:09.693 21:09:44 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:09.693 21:09:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:09.693 21:09:44 -- common/autotest_common.sh@10 -- # set +x 00:06:09.693 21:09:44 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:09.693 21:09:44 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:06:09.693 21:09:44 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:06:09.693 21:09:44 -- json_config/json_config.sh@287 -- # nvmftestinit 00:06:09.693 21:09:44 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:06:09.693 21:09:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.693 21:09:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:09.693 21:09:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:09.693 21:09:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:09.693 21:09:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.693 21:09:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:06:09.693 21:09:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.693 21:09:44 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:06:09.693 21:09:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:09.693 21:09:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:09.693 21:09:44 -- common/autotest_common.sh@10 -- # set +x 00:06:17.818 21:09:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:17.818 21:09:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:17.818 21:09:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:17.818 21:09:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:17.818 21:09:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:17.818 21:09:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:17.818 21:09:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:17.818 21:09:52 -- nvmf/common.sh@294 -- # net_devs=() 00:06:17.818 21:09:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:17.818 21:09:52 -- nvmf/common.sh@295 -- # e810=() 00:06:17.818 21:09:52 -- nvmf/common.sh@295 -- # local -ga e810 00:06:17.818 21:09:52 -- nvmf/common.sh@296 -- # x722=() 00:06:17.818 21:09:52 -- nvmf/common.sh@296 -- # local -ga x722 00:06:17.818 21:09:52 -- nvmf/common.sh@297 -- # mlx=() 00:06:17.818 21:09:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:17.818 21:09:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:06:17.818 21:09:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.818 21:09:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:17.818 21:09:52 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:06:17.818 21:09:52 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:06:17.818 21:09:52 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:06:17.818 21:09:52 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:06:17.818 21:09:52 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:06:17.818 21:09:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:17.818 21:09:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:17.818 21:09:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:17.818 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:17.818 21:09:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:06:17.818 21:09:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:06:17.818 21:09:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:17.818 21:09:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:17.818 21:09:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:06:17.818 21:09:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:06:17.818 21:09:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:17.818 21:09:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:17.818 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:17.818 21:09:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:06:17.818 21:09:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:06:17.819 21:09:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:17.819 21:09:52 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:17.819 21:09:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.819 21:09:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:17.819 21:09:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.819 21:09:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:17.819 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:17.819 21:09:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.819 21:09:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:17.819 21:09:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.819 21:09:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:17.819 21:09:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.819 21:09:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:17.819 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:17.819 21:09:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.819 21:09:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:17.819 21:09:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:17.819 21:09:52 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@408 -- # rdma_device_init 00:06:17.819 21:09:52 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:06:17.819 21:09:52 -- nvmf/common.sh@57 -- # uname 00:06:17.819 21:09:52 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:06:17.819 21:09:52 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:06:17.819 21:09:52 -- nvmf/common.sh@62 -- # modprobe ib_core 00:06:17.819 21:09:52 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:06:17.819 21:09:52 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:06:17.819 21:09:52 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:06:17.819 21:09:52 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:06:17.819 21:09:52 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:06:17.819 21:09:52 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:06:17.819 21:09:52 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:17.819 21:09:52 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:06:17.819 21:09:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:17.819 21:09:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:06:17.819 21:09:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:06:17.819 21:09:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:17.819 21:09:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:06:17.819 21:09:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:17.819 21:09:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.819 21:09:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:06:17.819 21:09:52 -- nvmf/common.sh@104 -- # continue 2 00:06:17.819 21:09:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:17.819 21:09:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.819 21:09:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.819 21:09:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:06:17.819 21:09:52 -- nvmf/common.sh@104 -- # continue 2 00:06:17.819 21:09:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:06:17.819 21:09:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:06:17.819 21:09:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:06:17.819 21:09:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:06:17.819 21:09:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:17.819 21:09:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:17.819 21:09:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:06:17.819 21:09:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:06:17.819 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:17.819 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:17.819 altname enp217s0f0np0 00:06:17.819 altname ens818f0np0 00:06:17.819 inet 192.168.100.8/24 scope global mlx_0_0 00:06:17.819 valid_lft forever preferred_lft forever 00:06:17.819 21:09:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:06:17.819 21:09:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:06:17.819 
21:09:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:06:17.819 21:09:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:06:17.819 21:09:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:17.819 21:09:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:17.819 21:09:52 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:06:17.819 21:09:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:06:17.819 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:17.819 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:17.819 altname enp217s0f1np1 00:06:17.819 altname ens818f1np1 00:06:17.819 inet 192.168.100.9/24 scope global mlx_0_1 00:06:17.819 valid_lft forever preferred_lft forever 00:06:17.819 21:09:52 -- nvmf/common.sh@410 -- # return 0 00:06:17.819 21:09:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:17.819 21:09:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:17.819 21:09:52 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:06:17.819 21:09:52 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:06:17.819 21:09:52 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:06:17.819 21:09:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:17.819 21:09:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:06:17.819 21:09:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:06:17.819 21:09:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:18.079 21:09:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:06:18.079 21:09:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:18.079 21:09:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:18.079 21:09:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:18.079 21:09:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:06:18.079 21:09:52 -- nvmf/common.sh@104 -- # continue 2 00:06:18.079 21:09:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:18.079 21:09:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:18.079 21:09:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:18.079 21:09:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:18.079 21:09:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:18.079 21:09:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:06:18.079 21:09:52 -- nvmf/common.sh@104 -- # continue 2 00:06:18.079 21:09:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:06:18.079 21:09:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:06:18.079 21:09:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:06:18.079 21:09:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:06:18.079 21:09:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:18.079 21:09:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:18.079 21:09:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:06:18.079 21:09:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:06:18.079 21:09:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:06:18.079 21:09:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:06:18.079 21:09:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:18.079 21:09:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:18.079 21:09:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:06:18.079 192.168.100.9' 00:06:18.079 21:09:52 -- nvmf/common.sh@445 -- # echo '192.168.100.8 
00:06:18.079 192.168.100.9' 00:06:18.079 21:09:52 -- nvmf/common.sh@445 -- # head -n 1 00:06:18.079 21:09:52 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:18.079 21:09:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:18.079 192.168.100.9' 00:06:18.079 21:09:52 -- nvmf/common.sh@446 -- # tail -n +2 00:06:18.079 21:09:52 -- nvmf/common.sh@446 -- # head -n 1 00:06:18.079 21:09:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:18.079 21:09:52 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:06:18.079 21:09:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:18.079 21:09:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:06:18.079 21:09:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:06:18.079 21:09:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:06:18.079 21:09:52 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:06:18.079 21:09:52 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:18.079 21:09:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:18.079 MallocForNvmf0 00:06:18.338 21:09:52 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:18.338 21:09:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:18.338 MallocForNvmf1 00:06:18.338 21:09:53 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:18.338 21:09:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:18.597 [2024-07-26 21:09:53.282616] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:18.597 [2024-07-26 21:09:53.312823] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1874b60/0x189d480) succeed. 00:06:18.597 [2024-07-26 21:09:53.326774] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1876d00/0x17fd380) succeed. 
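The nvmftestinit section above walks the detected mlx_0_0/mlx_0_1 ports and extracts their IPv4 addresses (192.168.100.8 and 192.168.100.9) to build RDMA_IP_LIST. The extraction is the ip/awk/cut one-liner visible in the trace; a stand-alone version of that helper for a single interface looks roughly like this:

    # Derive the IPv4 address of an RDMA-capable netdev, as nvmf/common.sh does above.
    get_ip_address() {
        local interface=$1
        # e.g. "inet 192.168.100.8/24 scope global mlx_0_0" -> "192.168.100.8"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run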
00:06:18.597 21:09:53 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.597 21:09:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.855 21:09:53 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.855 21:09:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.855 21:09:53 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.855 21:09:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:19.114 21:09:53 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:19.114 21:09:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:19.374 [2024-07-26 21:09:54.010502] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:19.374 21:09:54 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:19.374 21:09:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:19.374 21:09:54 -- common/autotest_common.sh@10 -- # set +x 00:06:19.374 21:09:54 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:19.374 21:09:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:19.374 21:09:54 -- common/autotest_common.sh@10 -- # set +x 00:06:19.374 21:09:54 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:19.374 21:09:54 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.374 21:09:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.634 MallocBdevForConfigChangeCheck 00:06:19.634 21:09:54 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:19.634 21:09:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:19.634 21:09:54 -- common/autotest_common.sh@10 -- # set +x 00:06:19.634 21:09:54 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:19.634 21:09:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.893 21:09:54 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:06:19.893 INFO: shutting down applications... 
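Taken together, the create_nvmf_subsystem_config steps above amount to the following RPC sequence: two malloc bdevs become namespaces of one NVMe-oF subsystem, served over the RDMA transport on the first mlx port. Every command and argument below is lifted from the trace; only the $rpc shorthand is added.

    # The NVMe-oF target configuration built by json_config above, as plain RPCs.
    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

    # -t rdma: RDMA transport, -u 8192: 8 KiB IO unit, -c 0: request no in-capsule data
    # (the target raises it to the 256-byte minimum, per the WARNING above).
    $rpc nvmf_create_transport -t rdma -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420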
00:06:19.893 21:09:54 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:19.893 21:09:54 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:19.893 21:09:54 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:19.893 21:09:54 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:22.430 Calling clear_iscsi_subsystem 00:06:22.430 Calling clear_nvmf_subsystem 00:06:22.430 Calling clear_nbd_subsystem 00:06:22.430 Calling clear_ublk_subsystem 00:06:22.430 Calling clear_vhost_blk_subsystem 00:06:22.430 Calling clear_vhost_scsi_subsystem 00:06:22.430 Calling clear_scheduler_subsystem 00:06:22.430 Calling clear_bdev_subsystem 00:06:22.430 Calling clear_accel_subsystem 00:06:22.430 Calling clear_vmd_subsystem 00:06:22.430 Calling clear_sock_subsystem 00:06:22.430 Calling clear_iobuf_subsystem 00:06:22.430 21:09:57 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:22.430 21:09:57 -- json_config/json_config.sh@396 -- # count=100 00:06:22.430 21:09:57 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:22.430 21:09:57 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.430 21:09:57 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:22.430 21:09:57 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:22.689 21:09:57 -- json_config/json_config.sh@398 -- # break 00:06:22.689 21:09:57 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:22.689 21:09:57 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:22.689 21:09:57 -- json_config/json_config.sh@120 -- # local app=target 00:06:22.689 21:09:57 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:22.689 21:09:57 -- json_config/json_config.sh@124 -- # [[ -n 1499671 ]] 00:06:22.689 21:09:57 -- json_config/json_config.sh@127 -- # kill -SIGINT 1499671 00:06:22.689 21:09:57 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:22.689 21:09:57 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:22.689 21:09:57 -- json_config/json_config.sh@130 -- # kill -0 1499671 00:06:22.689 21:09:57 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:23.258 21:09:57 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:23.258 21:09:57 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:23.258 21:09:57 -- json_config/json_config.sh@130 -- # kill -0 1499671 00:06:23.258 21:09:57 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:23.258 21:09:57 -- json_config/json_config.sh@132 -- # break 00:06:23.258 21:09:57 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:23.258 21:09:57 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:23.258 SPDK target shutdown done 00:06:23.258 21:09:57 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:23.258 INFO: relaunching applications... 
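The shutdown phase above first empties the live configuration with clear_config.py, re-filters the saved config through config_filter.py (check_empty / delete_global_parameters) until it is effectively empty, and only then sends SIGINT and polls (up to 30 half-second attempts) for the pid to disappear. A compact sketch of that kill-and-wait idiom, with the paths, signal, and pid from the log:

    # Sketch of json_config_test_shutdown_app: clear the config, then stop the target gracefully.
    sock=/var/tmp/spdk_tgt.sock
    tgt_pid=1499671    # pid recorded by the harness when this target instance was launched
    test/json_config/clear_config.py -s "$sock" clear_config

    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done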
00:06:23.258 21:09:57 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.258 21:09:57 -- json_config/json_config.sh@98 -- # local app=target 00:06:23.258 21:09:57 -- json_config/json_config.sh@99 -- # shift 00:06:23.258 21:09:57 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:23.258 21:09:57 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:23.258 21:09:57 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:23.258 21:09:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:23.258 21:09:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:23.258 21:09:57 -- json_config/json_config.sh@111 -- # app_pid[$app]=1505555 00:06:23.258 21:09:57 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:23.258 Waiting for target to run... 00:06:23.258 21:09:57 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.258 21:09:57 -- json_config/json_config.sh@114 -- # waitforlisten 1505555 /var/tmp/spdk_tgt.sock 00:06:23.258 21:09:57 -- common/autotest_common.sh@819 -- # '[' -z 1505555 ']' 00:06:23.258 21:09:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.258 21:09:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.258 21:09:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.258 21:09:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.258 21:09:57 -- common/autotest_common.sh@10 -- # set +x 00:06:23.258 [2024-07-26 21:09:57.956596] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:23.258 [2024-07-26 21:09:57.956668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505555 ] 00:06:23.258 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.826 [2024-07-26 21:09:58.413518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.826 [2024-07-26 21:09:58.440749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.826 [2024-07-26 21:09:58.440851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.111 [2024-07-26 21:10:01.471568] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x219b1c0/0x21c7dc0) succeed. 00:06:27.111 [2024-07-26 21:10:01.481953] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x219d360/0x2227dc0) succeed. 
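The relaunch above is the round-trip at the heart of the json_config test: the configuration captured earlier with save_config is fed back on the command line via --json, so the new target (pid 1505555) should come up with an identical configuration. Reduced to the two essential commands, with the paths and flags as in the log:

    # Capture the running config, then restart the target from that JSON snapshot.
    sock=/var/tmp/spdk_tgt.sock
    scripts/rpc.py -s "$sock" save_config > spdk_tgt_config.json

    build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --json spdk_tgt_config.json &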
00:06:27.111 [2024-07-26 21:10:01.531584] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:27.370 21:10:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.370 21:10:02 -- common/autotest_common.sh@852 -- # return 0 00:06:27.370 21:10:02 -- json_config/json_config.sh@115 -- # echo '' 00:06:27.370 00:06:27.370 21:10:02 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:27.370 21:10:02 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:27.370 INFO: Checking if target configuration is the same... 00:06:27.370 21:10:02 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.370 21:10:02 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:27.370 21:10:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.370 + '[' 2 -ne 2 ']' 00:06:27.370 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:27.370 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:27.370 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:27.370 +++ basename /dev/fd/62 00:06:27.370 ++ mktemp /tmp/62.XXX 00:06:27.370 + tmp_file_1=/tmp/62.j8S 00:06:27.370 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.370 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:27.370 + tmp_file_2=/tmp/spdk_tgt_config.json.2Sd 00:06:27.370 + ret=0 00:06:27.370 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.628 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.628 + diff -u /tmp/62.j8S /tmp/spdk_tgt_config.json.2Sd 00:06:27.628 + echo 'INFO: JSON config files are the same' 00:06:27.628 INFO: JSON config files are the same 00:06:27.628 + rm /tmp/62.j8S /tmp/spdk_tgt_config.json.2Sd 00:06:27.628 + exit 0 00:06:27.628 21:10:02 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:27.629 21:10:02 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:27.629 INFO: changing configuration and checking if this can be detected... 00:06:27.629 21:10:02 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:27.629 21:10:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:27.887 21:10:02 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.887 21:10:02 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:27.887 21:10:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.887 + '[' 2 -ne 2 ']' 00:06:27.887 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:27.887 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
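Both comparisons here go through json_diff.sh, which normalizes the live config (tgt_rpc save_config) and the saved spdk_tgt_config.json with config_filter.py -method sort and diffs the results: identical output yields "INFO: JSON config files are the same" above, while any difference (as after MallocBdevForConfigChangeCheck is deleted below) makes it exit 1 and the test reports a configuration change. A rough sketch of that comparison, assuming config_filter.py reads the config from stdin as it does in the piped invocations above:

    # Essence of json_diff.sh: sort-normalize both configs, then compare.
    sort_json() { test/json_config/config_filter.py -method sort; }

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_json > /tmp/live.json
    sort_json < spdk_tgt_config.json > /tmp/saved.json

    if diff -u /tmp/live.json /tmp/saved.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi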
00:06:27.887 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:27.887 +++ basename /dev/fd/62 00:06:27.887 ++ mktemp /tmp/62.XXX 00:06:27.887 + tmp_file_1=/tmp/62.K1s 00:06:27.887 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.887 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:27.887 + tmp_file_2=/tmp/spdk_tgt_config.json.rpJ 00:06:27.887 + ret=0 00:06:27.887 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:28.146 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:28.146 + diff -u /tmp/62.K1s /tmp/spdk_tgt_config.json.rpJ 00:06:28.146 + ret=1 00:06:28.146 + echo '=== Start of file: /tmp/62.K1s ===' 00:06:28.146 + cat /tmp/62.K1s 00:06:28.146 + echo '=== End of file: /tmp/62.K1s ===' 00:06:28.146 + echo '' 00:06:28.146 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rpJ ===' 00:06:28.146 + cat /tmp/spdk_tgt_config.json.rpJ 00:06:28.146 + echo '=== End of file: /tmp/spdk_tgt_config.json.rpJ ===' 00:06:28.146 + echo '' 00:06:28.146 + rm /tmp/62.K1s /tmp/spdk_tgt_config.json.rpJ 00:06:28.146 + exit 1 00:06:28.146 21:10:02 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:28.146 INFO: configuration change detected. 00:06:28.146 21:10:02 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:28.147 21:10:02 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:28.147 21:10:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:28.147 21:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.147 21:10:02 -- json_config/json_config.sh@360 -- # local ret=0 00:06:28.147 21:10:02 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:28.147 21:10:02 -- json_config/json_config.sh@370 -- # [[ -n 1505555 ]] 00:06:28.147 21:10:02 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:28.147 21:10:02 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:28.147 21:10:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:28.147 21:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.147 21:10:02 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:28.147 21:10:02 -- json_config/json_config.sh@246 -- # uname -s 00:06:28.147 21:10:02 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:28.147 21:10:02 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:28.147 21:10:02 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:28.147 21:10:02 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:28.147 21:10:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:28.147 21:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.147 21:10:02 -- json_config/json_config.sh@376 -- # killprocess 1505555 00:06:28.147 21:10:02 -- common/autotest_common.sh@926 -- # '[' -z 1505555 ']' 00:06:28.147 21:10:02 -- common/autotest_common.sh@930 -- # kill -0 1505555 00:06:28.147 21:10:02 -- common/autotest_common.sh@931 -- # uname 00:06:28.147 21:10:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:28.147 21:10:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1505555 00:06:28.147 21:10:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:28.147 21:10:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:28.147 21:10:02 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 1505555' 00:06:28.147 killing process with pid 1505555 00:06:28.147 21:10:02 -- common/autotest_common.sh@945 -- # kill 1505555 00:06:28.147 21:10:03 -- common/autotest_common.sh@950 -- # wait 1505555 00:06:30.720 21:10:05 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.720 21:10:05 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:30.720 21:10:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:30.720 21:10:05 -- common/autotest_common.sh@10 -- # set +x 00:06:30.720 21:10:05 -- json_config/json_config.sh@381 -- # return 0 00:06:30.720 21:10:05 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:30.720 INFO: Success 00:06:30.720 21:10:05 -- json_config/json_config.sh@1 -- # nvmftestfini 00:06:30.720 21:10:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:30.720 21:10:05 -- nvmf/common.sh@116 -- # sync 00:06:30.720 21:10:05 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:06:30.720 21:10:05 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:06:30.720 21:10:05 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:06:30.720 21:10:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:30.720 21:10:05 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:06:30.720 00:06:30.720 real 0m25.473s 00:06:30.720 user 0m28.408s 00:06:30.720 sys 0m8.848s 00:06:30.720 21:10:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.720 21:10:05 -- common/autotest_common.sh@10 -- # set +x 00:06:30.720 ************************************ 00:06:30.720 END TEST json_config 00:06:30.720 ************************************ 00:06:30.720 21:10:05 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:30.720 21:10:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.720 21:10:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.720 21:10:05 -- common/autotest_common.sh@10 -- # set +x 00:06:30.720 ************************************ 00:06:30.720 START TEST json_config_extra_key 00:06:30.720 ************************************ 00:06:30.720 21:10:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:30.721 21:10:05 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.721 21:10:05 -- nvmf/common.sh@7 -- # uname -s 00:06:30.721 21:10:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.721 21:10:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.721 21:10:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.721 21:10:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.721 21:10:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.721 21:10:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.721 21:10:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.721 21:10:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.721 21:10:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.721 21:10:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.980 21:10:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:30.980 21:10:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 
00:06:30.980 21:10:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.980 21:10:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.980 21:10:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.980 21:10:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:30.980 21:10:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.980 21:10:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.980 21:10:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.980 21:10:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.980 21:10:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.980 21:10:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.980 21:10:05 -- paths/export.sh@5 -- # export PATH 00:06:30.980 21:10:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.980 21:10:05 -- nvmf/common.sh@46 -- # : 0 00:06:30.980 21:10:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:30.980 21:10:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:30.980 21:10:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:30.980 21:10:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.980 21:10:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.980 21:10:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:30.980 21:10:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:30.980 21:10:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:30.980 21:10:05 -- 
json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:30.980 INFO: launching applications... 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1507028 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:30.980 Waiting for target to run... 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1507028 /var/tmp/spdk_tgt.sock 00:06:30.980 21:10:05 -- common/autotest_common.sh@819 -- # '[' -z 1507028 ']' 00:06:30.980 21:10:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.980 21:10:05 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:30.980 21:10:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.980 21:10:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:30.980 21:10:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.980 21:10:05 -- common/autotest_common.sh@10 -- # set +x 00:06:30.980 [2024-07-26 21:10:05.660013] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
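The json_config run traced above verifies configuration persistence in two passes: it asks the running target for its live configuration over the RPC socket (save_config), normalizes both the live dump and the previously written spdk_tgt_config.json with config_filter.py -method sort, and diffs the two, a clean diff printing "JSON config files are the same". It then deletes MallocBdevForConfigChangeCheck and repeats the comparison, expecting the diff to fail so that "configuration change detected" is reported. A condensed sketch of that sequence, assuming config_filter.py reads JSON on stdin and writes the sorted form to stdout (the exact plumbing lives in json_diff.sh, and the temp file names here are illustrative, the script uses mktemp), with rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk as in the trace:

  # compare the live target config against the saved one
  $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $rootdir/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  $rootdir/test/json_config/config_filter.py -method sort \
    < $rootdir/spdk_tgt_config.json > /tmp/saved_sorted.json
  diff -u /tmp/saved_sorted.json /tmp/live_sorted.json    # exit 0 -> configs are the same
  # mutate the target, then re-run the same diff and expect it to report a difference
  $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck

The json_config_extra_key test that starts next takes the opposite direction: instead of saving a config out of a running target, it boots spdk_tgt directly from a pre-written file (--json .../test/json_config/extra_key.json) and essentially checks that the target comes up on /var/tmp/spdk_tgt.sock and shuts down cleanly.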
00:06:30.980 [2024-07-26 21:10:05.660070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507028 ] 00:06:30.980 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.239 [2024-07-26 21:10:05.964824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.239 [2024-07-26 21:10:05.984527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.239 [2024-07-26 21:10:05.984637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.808 21:10:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.808 21:10:06 -- common/autotest_common.sh@852 -- # return 0 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:31.808 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:31.808 INFO: shutting down applications... 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1507028 ]] 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1507028 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1507028 00:06:31.808 21:10:06 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:32.377 21:10:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:32.377 21:10:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:32.377 21:10:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1507028 00:06:32.377 21:10:06 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:32.377 21:10:06 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:32.377 21:10:06 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:32.377 21:10:06 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:32.377 SPDK target shutdown done 00:06:32.377 21:10:06 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:32.377 Success 00:06:32.377 00:06:32.377 real 0m1.434s 00:06:32.377 user 0m1.116s 00:06:32.377 sys 0m0.432s 00:06:32.377 21:10:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.377 21:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.377 ************************************ 00:06:32.377 END TEST json_config_extra_key 00:06:32.377 ************************************ 00:06:32.377 21:10:06 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:32.377 21:10:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.377 21:10:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.377 21:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.377 ************************************ 00:06:32.377 START TEST alias_rpc 00:06:32.377 ************************************ 00:06:32.377 21:10:06 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:32.377 * Looking for test storage... 00:06:32.377 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:32.377 21:10:07 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.377 21:10:07 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1507325 00:06:32.377 21:10:07 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1507325 00:06:32.377 21:10:07 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.377 21:10:07 -- common/autotest_common.sh@819 -- # '[' -z 1507325 ']' 00:06:32.377 21:10:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.377 21:10:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.377 21:10:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.377 21:10:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.377 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:06:32.377 [2024-07-26 21:10:07.124681] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:32.377 [2024-07-26 21:10:07.124739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507325 ] 00:06:32.377 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.377 [2024-07-26 21:10:07.208534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.377 [2024-07-26 21:10:07.245914] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.377 [2024-07-26 21:10:07.246027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.315 21:10:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.315 21:10:07 -- common/autotest_common.sh@852 -- # return 0 00:06:33.315 21:10:07 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:33.315 21:10:08 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1507325 00:06:33.315 21:10:08 -- common/autotest_common.sh@926 -- # '[' -z 1507325 ']' 00:06:33.315 21:10:08 -- common/autotest_common.sh@930 -- # kill -0 1507325 00:06:33.315 21:10:08 -- common/autotest_common.sh@931 -- # uname 00:06:33.315 21:10:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:33.315 21:10:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1507325 00:06:33.315 21:10:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:33.315 21:10:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:33.315 21:10:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1507325' 00:06:33.315 killing process with pid 1507325 00:06:33.315 21:10:08 -- common/autotest_common.sh@945 -- # kill 1507325 00:06:33.315 21:10:08 -- common/autotest_common.sh@950 -- # wait 1507325 00:06:33.883 00:06:33.883 real 0m1.480s 00:06:33.883 user 0m1.578s 00:06:33.883 sys 0m0.449s 00:06:33.883 21:10:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.883 21:10:08 -- common/autotest_common.sh@10 -- # set +x 
00:06:33.883 ************************************ 00:06:33.883 END TEST alias_rpc 00:06:33.883 ************************************ 00:06:33.883 21:10:08 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:33.883 21:10:08 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:33.883 21:10:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.883 21:10:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.883 21:10:08 -- common/autotest_common.sh@10 -- # set +x 00:06:33.883 ************************************ 00:06:33.883 START TEST spdkcli_tcp 00:06:33.883 ************************************ 00:06:33.883 21:10:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:33.883 * Looking for test storage... 00:06:33.883 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:33.883 21:10:08 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:33.883 21:10:08 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:33.883 21:10:08 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:33.883 21:10:08 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:33.883 21:10:08 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:33.883 21:10:08 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:33.883 21:10:08 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:33.883 21:10:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:33.883 21:10:08 -- common/autotest_common.sh@10 -- # set +x 00:06:33.883 21:10:08 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1507638 00:06:33.884 21:10:08 -- spdkcli/tcp.sh@27 -- # waitforlisten 1507638 00:06:33.884 21:10:08 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:33.884 21:10:08 -- common/autotest_common.sh@819 -- # '[' -z 1507638 ']' 00:06:33.884 21:10:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.884 21:10:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.884 21:10:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.884 21:10:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.884 21:10:08 -- common/autotest_common.sh@10 -- # set +x 00:06:33.884 [2024-07-26 21:10:08.661016] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
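The spdkcli_tcp run that follows exercises the RPC server over TCP rather than the default UNIX socket: a socat process bridges TCP port 9998 on 127.0.0.1 to /var/tmp/spdk.sock, and rpc.py is then pointed at that TCP endpoint (the long method list printed below is simply the output of rpc_get_methods fetched through the bridge). A condensed sketch of that flow, with the -r 100 -t 2 retry/timeout options copied verbatim from the trace:

  # bridge TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # issue an RPC over the TCP bridge instead of the UNIX socket
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill $socat_pid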
00:06:33.884 [2024-07-26 21:10:08.661076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507638 ] 00:06:33.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.884 [2024-07-26 21:10:08.747370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.143 [2024-07-26 21:10:08.785763] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.143 [2024-07-26 21:10:08.785899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.143 [2024-07-26 21:10:08.785903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.711 21:10:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.711 21:10:09 -- common/autotest_common.sh@852 -- # return 0 00:06:34.711 21:10:09 -- spdkcli/tcp.sh@31 -- # socat_pid=1507686 00:06:34.711 21:10:09 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:34.711 21:10:09 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:34.970 [ 00:06:34.970 "bdev_malloc_delete", 00:06:34.970 "bdev_malloc_create", 00:06:34.970 "bdev_null_resize", 00:06:34.970 "bdev_null_delete", 00:06:34.970 "bdev_null_create", 00:06:34.970 "bdev_nvme_cuse_unregister", 00:06:34.970 "bdev_nvme_cuse_register", 00:06:34.970 "bdev_opal_new_user", 00:06:34.970 "bdev_opal_set_lock_state", 00:06:34.970 "bdev_opal_delete", 00:06:34.970 "bdev_opal_get_info", 00:06:34.970 "bdev_opal_create", 00:06:34.970 "bdev_nvme_opal_revert", 00:06:34.970 "bdev_nvme_opal_init", 00:06:34.970 "bdev_nvme_send_cmd", 00:06:34.970 "bdev_nvme_get_path_iostat", 00:06:34.970 "bdev_nvme_get_mdns_discovery_info", 00:06:34.970 "bdev_nvme_stop_mdns_discovery", 00:06:34.970 "bdev_nvme_start_mdns_discovery", 00:06:34.970 "bdev_nvme_set_multipath_policy", 00:06:34.970 "bdev_nvme_set_preferred_path", 00:06:34.970 "bdev_nvme_get_io_paths", 00:06:34.970 "bdev_nvme_remove_error_injection", 00:06:34.971 "bdev_nvme_add_error_injection", 00:06:34.971 "bdev_nvme_get_discovery_info", 00:06:34.971 "bdev_nvme_stop_discovery", 00:06:34.971 "bdev_nvme_start_discovery", 00:06:34.971 "bdev_nvme_get_controller_health_info", 00:06:34.971 "bdev_nvme_disable_controller", 00:06:34.971 "bdev_nvme_enable_controller", 00:06:34.971 "bdev_nvme_reset_controller", 00:06:34.971 "bdev_nvme_get_transport_statistics", 00:06:34.971 "bdev_nvme_apply_firmware", 00:06:34.971 "bdev_nvme_detach_controller", 00:06:34.971 "bdev_nvme_get_controllers", 00:06:34.971 "bdev_nvme_attach_controller", 00:06:34.971 "bdev_nvme_set_hotplug", 00:06:34.971 "bdev_nvme_set_options", 00:06:34.971 "bdev_passthru_delete", 00:06:34.971 "bdev_passthru_create", 00:06:34.971 "bdev_lvol_grow_lvstore", 00:06:34.971 "bdev_lvol_get_lvols", 00:06:34.971 "bdev_lvol_get_lvstores", 00:06:34.971 "bdev_lvol_delete", 00:06:34.971 "bdev_lvol_set_read_only", 00:06:34.971 "bdev_lvol_resize", 00:06:34.971 "bdev_lvol_decouple_parent", 00:06:34.971 "bdev_lvol_inflate", 00:06:34.971 "bdev_lvol_rename", 00:06:34.971 "bdev_lvol_clone_bdev", 00:06:34.971 "bdev_lvol_clone", 00:06:34.971 "bdev_lvol_snapshot", 00:06:34.971 "bdev_lvol_create", 00:06:34.971 "bdev_lvol_delete_lvstore", 00:06:34.971 "bdev_lvol_rename_lvstore", 00:06:34.971 "bdev_lvol_create_lvstore", 00:06:34.971 "bdev_raid_set_options", 00:06:34.971 
"bdev_raid_remove_base_bdev", 00:06:34.971 "bdev_raid_add_base_bdev", 00:06:34.971 "bdev_raid_delete", 00:06:34.971 "bdev_raid_create", 00:06:34.971 "bdev_raid_get_bdevs", 00:06:34.971 "bdev_error_inject_error", 00:06:34.971 "bdev_error_delete", 00:06:34.971 "bdev_error_create", 00:06:34.971 "bdev_split_delete", 00:06:34.971 "bdev_split_create", 00:06:34.971 "bdev_delay_delete", 00:06:34.971 "bdev_delay_create", 00:06:34.971 "bdev_delay_update_latency", 00:06:34.971 "bdev_zone_block_delete", 00:06:34.971 "bdev_zone_block_create", 00:06:34.971 "blobfs_create", 00:06:34.971 "blobfs_detect", 00:06:34.971 "blobfs_set_cache_size", 00:06:34.971 "bdev_aio_delete", 00:06:34.971 "bdev_aio_rescan", 00:06:34.971 "bdev_aio_create", 00:06:34.971 "bdev_ftl_set_property", 00:06:34.971 "bdev_ftl_get_properties", 00:06:34.971 "bdev_ftl_get_stats", 00:06:34.971 "bdev_ftl_unmap", 00:06:34.971 "bdev_ftl_unload", 00:06:34.971 "bdev_ftl_delete", 00:06:34.971 "bdev_ftl_load", 00:06:34.971 "bdev_ftl_create", 00:06:34.971 "bdev_virtio_attach_controller", 00:06:34.971 "bdev_virtio_scsi_get_devices", 00:06:34.971 "bdev_virtio_detach_controller", 00:06:34.971 "bdev_virtio_blk_set_hotplug", 00:06:34.971 "bdev_iscsi_delete", 00:06:34.971 "bdev_iscsi_create", 00:06:34.971 "bdev_iscsi_set_options", 00:06:34.971 "accel_error_inject_error", 00:06:34.971 "ioat_scan_accel_module", 00:06:34.971 "dsa_scan_accel_module", 00:06:34.971 "iaa_scan_accel_module", 00:06:34.971 "iscsi_set_options", 00:06:34.971 "iscsi_get_auth_groups", 00:06:34.971 "iscsi_auth_group_remove_secret", 00:06:34.971 "iscsi_auth_group_add_secret", 00:06:34.971 "iscsi_delete_auth_group", 00:06:34.971 "iscsi_create_auth_group", 00:06:34.971 "iscsi_set_discovery_auth", 00:06:34.971 "iscsi_get_options", 00:06:34.971 "iscsi_target_node_request_logout", 00:06:34.971 "iscsi_target_node_set_redirect", 00:06:34.971 "iscsi_target_node_set_auth", 00:06:34.971 "iscsi_target_node_add_lun", 00:06:34.971 "iscsi_get_connections", 00:06:34.971 "iscsi_portal_group_set_auth", 00:06:34.971 "iscsi_start_portal_group", 00:06:34.971 "iscsi_delete_portal_group", 00:06:34.971 "iscsi_create_portal_group", 00:06:34.971 "iscsi_get_portal_groups", 00:06:34.971 "iscsi_delete_target_node", 00:06:34.971 "iscsi_target_node_remove_pg_ig_maps", 00:06:34.971 "iscsi_target_node_add_pg_ig_maps", 00:06:34.971 "iscsi_create_target_node", 00:06:34.971 "iscsi_get_target_nodes", 00:06:34.971 "iscsi_delete_initiator_group", 00:06:34.971 "iscsi_initiator_group_remove_initiators", 00:06:34.971 "iscsi_initiator_group_add_initiators", 00:06:34.971 "iscsi_create_initiator_group", 00:06:34.971 "iscsi_get_initiator_groups", 00:06:34.971 "nvmf_set_crdt", 00:06:34.971 "nvmf_set_config", 00:06:34.971 "nvmf_set_max_subsystems", 00:06:34.971 "nvmf_subsystem_get_listeners", 00:06:34.971 "nvmf_subsystem_get_qpairs", 00:06:34.971 "nvmf_subsystem_get_controllers", 00:06:34.971 "nvmf_get_stats", 00:06:34.971 "nvmf_get_transports", 00:06:34.971 "nvmf_create_transport", 00:06:34.971 "nvmf_get_targets", 00:06:34.971 "nvmf_delete_target", 00:06:34.971 "nvmf_create_target", 00:06:34.971 "nvmf_subsystem_allow_any_host", 00:06:34.971 "nvmf_subsystem_remove_host", 00:06:34.971 "nvmf_subsystem_add_host", 00:06:34.971 "nvmf_subsystem_remove_ns", 00:06:34.971 "nvmf_subsystem_add_ns", 00:06:34.971 "nvmf_subsystem_listener_set_ana_state", 00:06:34.971 "nvmf_discovery_get_referrals", 00:06:34.971 "nvmf_discovery_remove_referral", 00:06:34.971 "nvmf_discovery_add_referral", 00:06:34.971 "nvmf_subsystem_remove_listener", 
00:06:34.971 "nvmf_subsystem_add_listener", 00:06:34.971 "nvmf_delete_subsystem", 00:06:34.971 "nvmf_create_subsystem", 00:06:34.971 "nvmf_get_subsystems", 00:06:34.971 "env_dpdk_get_mem_stats", 00:06:34.971 "nbd_get_disks", 00:06:34.971 "nbd_stop_disk", 00:06:34.971 "nbd_start_disk", 00:06:34.971 "ublk_recover_disk", 00:06:34.971 "ublk_get_disks", 00:06:34.971 "ublk_stop_disk", 00:06:34.971 "ublk_start_disk", 00:06:34.971 "ublk_destroy_target", 00:06:34.971 "ublk_create_target", 00:06:34.971 "virtio_blk_create_transport", 00:06:34.971 "virtio_blk_get_transports", 00:06:34.971 "vhost_controller_set_coalescing", 00:06:34.971 "vhost_get_controllers", 00:06:34.971 "vhost_delete_controller", 00:06:34.971 "vhost_create_blk_controller", 00:06:34.971 "vhost_scsi_controller_remove_target", 00:06:34.971 "vhost_scsi_controller_add_target", 00:06:34.971 "vhost_start_scsi_controller", 00:06:34.971 "vhost_create_scsi_controller", 00:06:34.971 "thread_set_cpumask", 00:06:34.971 "framework_get_scheduler", 00:06:34.971 "framework_set_scheduler", 00:06:34.971 "framework_get_reactors", 00:06:34.971 "thread_get_io_channels", 00:06:34.971 "thread_get_pollers", 00:06:34.971 "thread_get_stats", 00:06:34.971 "framework_monitor_context_switch", 00:06:34.971 "spdk_kill_instance", 00:06:34.971 "log_enable_timestamps", 00:06:34.971 "log_get_flags", 00:06:34.971 "log_clear_flag", 00:06:34.971 "log_set_flag", 00:06:34.971 "log_get_level", 00:06:34.971 "log_set_level", 00:06:34.971 "log_get_print_level", 00:06:34.971 "log_set_print_level", 00:06:34.971 "framework_enable_cpumask_locks", 00:06:34.971 "framework_disable_cpumask_locks", 00:06:34.971 "framework_wait_init", 00:06:34.971 "framework_start_init", 00:06:34.971 "scsi_get_devices", 00:06:34.971 "bdev_get_histogram", 00:06:34.971 "bdev_enable_histogram", 00:06:34.971 "bdev_set_qos_limit", 00:06:34.971 "bdev_set_qd_sampling_period", 00:06:34.971 "bdev_get_bdevs", 00:06:34.971 "bdev_reset_iostat", 00:06:34.971 "bdev_get_iostat", 00:06:34.971 "bdev_examine", 00:06:34.971 "bdev_wait_for_examine", 00:06:34.971 "bdev_set_options", 00:06:34.971 "notify_get_notifications", 00:06:34.971 "notify_get_types", 00:06:34.971 "accel_get_stats", 00:06:34.971 "accel_set_options", 00:06:34.971 "accel_set_driver", 00:06:34.971 "accel_crypto_key_destroy", 00:06:34.971 "accel_crypto_keys_get", 00:06:34.971 "accel_crypto_key_create", 00:06:34.971 "accel_assign_opc", 00:06:34.971 "accel_get_module_info", 00:06:34.971 "accel_get_opc_assignments", 00:06:34.971 "vmd_rescan", 00:06:34.971 "vmd_remove_device", 00:06:34.971 "vmd_enable", 00:06:34.971 "sock_set_default_impl", 00:06:34.971 "sock_impl_set_options", 00:06:34.971 "sock_impl_get_options", 00:06:34.971 "iobuf_get_stats", 00:06:34.971 "iobuf_set_options", 00:06:34.971 "framework_get_pci_devices", 00:06:34.971 "framework_get_config", 00:06:34.971 "framework_get_subsystems", 00:06:34.971 "trace_get_info", 00:06:34.971 "trace_get_tpoint_group_mask", 00:06:34.971 "trace_disable_tpoint_group", 00:06:34.971 "trace_enable_tpoint_group", 00:06:34.971 "trace_clear_tpoint_mask", 00:06:34.971 "trace_set_tpoint_mask", 00:06:34.971 "spdk_get_version", 00:06:34.971 "rpc_get_methods" 00:06:34.971 ] 00:06:34.971 21:10:09 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:34.971 21:10:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:34.971 21:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:34.971 21:10:09 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:34.971 21:10:09 -- spdkcli/tcp.sh@38 -- # killprocess 
1507638 00:06:34.971 21:10:09 -- common/autotest_common.sh@926 -- # '[' -z 1507638 ']' 00:06:34.971 21:10:09 -- common/autotest_common.sh@930 -- # kill -0 1507638 00:06:34.971 21:10:09 -- common/autotest_common.sh@931 -- # uname 00:06:34.971 21:10:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.971 21:10:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1507638 00:06:34.971 21:10:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.971 21:10:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.972 21:10:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1507638' 00:06:34.972 killing process with pid 1507638 00:06:34.972 21:10:09 -- common/autotest_common.sh@945 -- # kill 1507638 00:06:34.972 21:10:09 -- common/autotest_common.sh@950 -- # wait 1507638 00:06:35.231 00:06:35.231 real 0m1.490s 00:06:35.231 user 0m2.715s 00:06:35.231 sys 0m0.504s 00:06:35.231 21:10:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.231 21:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:35.231 ************************************ 00:06:35.231 END TEST spdkcli_tcp 00:06:35.231 ************************************ 00:06:35.231 21:10:10 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:35.231 21:10:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.231 21:10:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.231 21:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:35.231 ************************************ 00:06:35.231 START TEST dpdk_mem_utility 00:06:35.231 ************************************ 00:06:35.231 21:10:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:35.490 * Looking for test storage... 00:06:35.490 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:35.490 21:10:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:35.490 21:10:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1507984 00:06:35.490 21:10:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1507984 00:06:35.490 21:10:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.490 21:10:10 -- common/autotest_common.sh@819 -- # '[' -z 1507984 ']' 00:06:35.490 21:10:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.490 21:10:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.490 21:10:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.490 21:10:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.490 21:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:35.490 [2024-07-26 21:10:10.197916] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
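The dpdk_mem_utility test that follows drives two things: an env_dpdk_get_mem_stats RPC, which makes the target write a DPDK memory snapshot to /tmp/spdk_mem_dump.txt, and the dpdk_mem_info.py helper, which summarizes that snapshot (the heap/mempool/memzone totals printed below) and, with -m 0, dumps the per-element detail for heap id 0. A condensed sketch, assuming dpdk_mem_info.py reads the dump file the RPC just produced:

  # ask the running target to dump its DPDK memory state
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # -> /tmp/spdk_mem_dump.txt
  # summarize the dump (heaps, mempools, memzones)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py
  # detailed free/malloc element listing for heap id 0
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0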
00:06:35.490 [2024-07-26 21:10:10.197972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507984 ] 00:06:35.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.490 [2024-07-26 21:10:10.282957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.491 [2024-07-26 21:10:10.320418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.491 [2024-07-26 21:10:10.320530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.429 21:10:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.429 21:10:10 -- common/autotest_common.sh@852 -- # return 0 00:06:36.429 21:10:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:36.429 21:10:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:36.429 21:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:36.429 21:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:36.429 { 00:06:36.429 "filename": "/tmp/spdk_mem_dump.txt" 00:06:36.429 } 00:06:36.429 21:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:36.429 21:10:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:36.429 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:36.429 1 heaps totaling size 814.000000 MiB 00:06:36.429 size: 814.000000 MiB heap id: 0 00:06:36.429 end heaps---------- 00:06:36.429 8 mempools totaling size 598.116089 MiB 00:06:36.429 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:36.429 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:36.429 size: 84.521057 MiB name: bdev_io_1507984 00:06:36.429 size: 51.011292 MiB name: evtpool_1507984 00:06:36.429 size: 50.003479 MiB name: msgpool_1507984 00:06:36.429 size: 21.763794 MiB name: PDU_Pool 00:06:36.429 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:36.429 size: 0.026123 MiB name: Session_Pool 00:06:36.429 end mempools------- 00:06:36.429 6 memzones totaling size 4.142822 MiB 00:06:36.429 size: 1.000366 MiB name: RG_ring_0_1507984 00:06:36.429 size: 1.000366 MiB name: RG_ring_1_1507984 00:06:36.429 size: 1.000366 MiB name: RG_ring_4_1507984 00:06:36.429 size: 1.000366 MiB name: RG_ring_5_1507984 00:06:36.429 size: 0.125366 MiB name: RG_ring_2_1507984 00:06:36.429 size: 0.015991 MiB name: RG_ring_3_1507984 00:06:36.429 end memzones------- 00:06:36.429 21:10:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:36.430 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:36.430 list of free elements. 
size: 12.519348 MiB 00:06:36.430 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:36.430 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:36.430 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:36.430 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:36.430 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:36.430 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:36.430 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:36.430 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:36.430 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:36.430 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:36.430 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:36.430 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:36.430 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:36.430 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:36.430 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:36.430 list of standard malloc elements. size: 199.218079 MiB 00:06:36.430 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:36.430 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:36.430 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:36.430 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:36.430 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:36.430 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:36.430 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:36.430 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:36.430 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:36.430 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:36.430 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:36.430 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:36.430 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:36.430 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:36.430 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:36.430 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:36.430 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:36.430 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:36.430 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:36.430 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:36.430 list of memzone associated elements. size: 602.262573 MiB 00:06:36.430 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:36.430 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:36.430 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:36.430 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:36.430 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:36.430 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1507984_0 00:06:36.430 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:36.430 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1507984_0 00:06:36.430 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:36.430 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1507984_0 00:06:36.430 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:36.430 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:36.430 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:36.430 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:36.430 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:36.430 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1507984 00:06:36.430 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:36.430 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1507984 00:06:36.430 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:36.430 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1507984 00:06:36.430 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:36.430 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:36.430 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:36.430 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:36.430 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:36.430 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:36.430 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:36.430 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:36.430 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:36.430 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1507984 00:06:36.430 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:36.430 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1507984 00:06:36.430 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:36.430 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1507984 00:06:36.430 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:36.430 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1507984 00:06:36.430 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:36.430 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1507984 00:06:36.430 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:36.430 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:36.430 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:36.430 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:36.430 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:36.430 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:36.430 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:36.430 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1507984 00:06:36.430 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:36.430 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:36.430 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:36.430 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:36.430 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:36.430 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1507984 00:06:36.430 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:36.430 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:36.430 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:36.430 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1507984 00:06:36.430 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:36.430 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1507984 00:06:36.430 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:36.430 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:36.430 21:10:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:36.430 21:10:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1507984 00:06:36.430 21:10:11 -- common/autotest_common.sh@926 -- # '[' -z 1507984 ']' 00:06:36.430 21:10:11 -- common/autotest_common.sh@930 -- # kill -0 1507984 00:06:36.430 21:10:11 -- common/autotest_common.sh@931 -- # uname 00:06:36.430 21:10:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.430 21:10:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1507984 00:06:36.430 21:10:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.430 21:10:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.430 21:10:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1507984' 00:06:36.430 killing process with pid 1507984 00:06:36.430 21:10:11 -- common/autotest_common.sh@945 -- # kill 1507984 00:06:36.430 21:10:11 -- common/autotest_common.sh@950 -- # wait 1507984 00:06:36.690 00:06:36.690 real 0m1.380s 00:06:36.690 user 0m1.400s 00:06:36.690 sys 0m0.443s 00:06:36.690 21:10:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.690 21:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:36.690 ************************************ 00:06:36.690 END TEST dpdk_mem_utility 00:06:36.690 ************************************ 00:06:36.690 21:10:11 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:36.690 21:10:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.690 21:10:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.690 21:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:36.690 
************************************ 00:06:36.690 START TEST event 00:06:36.690 ************************************ 00:06:36.690 21:10:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:36.690 * Looking for test storage... 00:06:36.690 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:36.690 21:10:11 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:36.690 21:10:11 -- bdev/nbd_common.sh@6 -- # set -e 00:06:36.950 21:10:11 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.950 21:10:11 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:36.950 21:10:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.950 21:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:36.950 ************************************ 00:06:36.950 START TEST event_perf 00:06:36.950 ************************************ 00:06:36.950 21:10:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.950 Running I/O for 1 seconds...[2024-07-26 21:10:11.588069] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:36.950 [2024-07-26 21:10:11.588164] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508232 ] 00:06:36.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.950 [2024-07-26 21:10:11.674783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.950 [2024-07-26 21:10:11.713743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.950 [2024-07-26 21:10:11.713840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.950 [2024-07-26 21:10:11.713903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.950 [2024-07-26 21:10:11.713905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.329 Running I/O for 1 seconds... 00:06:38.329 lcore 0: 208106 00:06:38.329 lcore 1: 208107 00:06:38.329 lcore 2: 208107 00:06:38.329 lcore 3: 208107 00:06:38.329 done. 
00:06:38.329 00:06:38.329 real 0m1.206s 00:06:38.329 user 0m4.101s 00:06:38.329 sys 0m0.104s 00:06:38.329 21:10:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.329 21:10:12 -- common/autotest_common.sh@10 -- # set +x 00:06:38.329 ************************************ 00:06:38.329 END TEST event_perf 00:06:38.329 ************************************ 00:06:38.330 21:10:12 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:38.330 21:10:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:38.330 21:10:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.330 21:10:12 -- common/autotest_common.sh@10 -- # set +x 00:06:38.330 ************************************ 00:06:38.330 START TEST event_reactor 00:06:38.330 ************************************ 00:06:38.330 21:10:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:38.330 [2024-07-26 21:10:12.832641] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:38.330 [2024-07-26 21:10:12.832730] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508380 ] 00:06:38.330 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.330 [2024-07-26 21:10:12.919811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.330 [2024-07-26 21:10:12.954798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.269 test_start 00:06:39.269 oneshot 00:06:39.269 tick 100 00:06:39.269 tick 100 00:06:39.269 tick 250 00:06:39.269 tick 100 00:06:39.269 tick 100 00:06:39.269 tick 100 00:06:39.269 tick 250 00:06:39.269 tick 500 00:06:39.269 tick 100 00:06:39.269 tick 100 00:06:39.269 tick 250 00:06:39.269 tick 100 00:06:39.269 tick 100 00:06:39.269 test_end 00:06:39.269 00:06:39.269 real 0m1.199s 00:06:39.269 user 0m1.096s 00:06:39.269 sys 0m0.099s 00:06:39.269 21:10:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.270 21:10:14 -- common/autotest_common.sh@10 -- # set +x 00:06:39.270 ************************************ 00:06:39.270 END TEST event_reactor 00:06:39.270 ************************************ 00:06:39.270 21:10:14 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.270 21:10:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:39.270 21:10:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.270 21:10:14 -- common/autotest_common.sh@10 -- # set +x 00:06:39.270 ************************************ 00:06:39.270 START TEST event_reactor_perf 00:06:39.270 ************************************ 00:06:39.270 21:10:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.270 [2024-07-26 21:10:14.076648] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
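The event_perf and event_reactor runs above, and the event_reactor_perf run starting here, are small benchmarks of the SPDK event framework rather than functional tests: event_perf spreads event handling across the cores in its mask for one second and reports a per-lcore count, reactor schedules a one-shot event plus timed events whose ticks are logged, and reactor_perf measures raw event throughput (its events-per-second figure is printed just below). Condensed from the commands in the trace:

  testdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event
  $testdir/event_perf/event_perf -m 0xF -t 1      # 4 lcores, 1 second; prints events handled per lcore
  $testdir/reactor/reactor -t 1                   # one-shot plus timed events; logs tick markers
  $testdir/reactor_perf/reactor_perf -t 1         # reports events per second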
00:06:39.270 [2024-07-26 21:10:14.076737] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508658 ] 00:06:39.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.529 [2024-07-26 21:10:14.163806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.529 [2024-07-26 21:10:14.198431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.466 test_start 00:06:40.466 test_end 00:06:40.466 Performance: 519008 events per second 00:06:40.466 00:06:40.466 real 0m1.202s 00:06:40.466 user 0m1.099s 00:06:40.466 sys 0m0.099s 00:06:40.466 21:10:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.466 21:10:15 -- common/autotest_common.sh@10 -- # set +x 00:06:40.466 ************************************ 00:06:40.466 END TEST event_reactor_perf 00:06:40.466 ************************************ 00:06:40.466 21:10:15 -- event/event.sh@49 -- # uname -s 00:06:40.466 21:10:15 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:40.466 21:10:15 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:40.466 21:10:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.466 21:10:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.466 21:10:15 -- common/autotest_common.sh@10 -- # set +x 00:06:40.466 ************************************ 00:06:40.466 START TEST event_scheduler 00:06:40.466 ************************************ 00:06:40.466 21:10:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:40.725 * Looking for test storage... 00:06:40.725 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:40.725 21:10:15 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:40.725 21:10:15 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1508966 00:06:40.725 21:10:15 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.725 21:10:15 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:40.725 21:10:15 -- scheduler/scheduler.sh@37 -- # waitforlisten 1508966 00:06:40.725 21:10:15 -- common/autotest_common.sh@819 -- # '[' -z 1508966 ']' 00:06:40.725 21:10:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.725 21:10:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.725 21:10:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.725 21:10:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.725 21:10:15 -- common/autotest_common.sh@10 -- # set +x 00:06:40.725 [2024-07-26 21:10:15.451159] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
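The event_scheduler test starting here launches the scheduler test app across all four cores with core 2 as the main lcore (-m 0xF -p 0x2, matching --main-lcore=2 in the EAL parameters), in --wait-for-rpc mode so initialization pauses until the test picks a scheduler. The trace below then switches to the dynamic scheduler and finishes startup over RPC, which is when the per-lcore power-management governors are set and the dynamic scheduler prints its load/core/busy limits. Condensed, with rpc_cmd standing in for rpc.py against the app's default socket (as the test harness wires it up):

  # start the test app, paused until RPC configuration is done
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # pick the dynamic scheduler, then let the framework finish starting
  rpc_cmd framework_set_scheduler dynamic
  rpc_cmd framework_start_init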
00:06:40.725 [2024-07-26 21:10:15.451214] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508966 ] 00:06:40.725 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.725 [2024-07-26 21:10:15.530680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.725 [2024-07-26 21:10:15.568923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.725 [2024-07-26 21:10:15.569009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.725 [2024-07-26 21:10:15.569100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.725 [2024-07-26 21:10:15.569112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.662 21:10:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.662 21:10:16 -- common/autotest_common.sh@852 -- # return 0 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 POWER: Env isn't set yet! 00:06:41.662 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:41.662 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:41.662 POWER: Cannot set governor of lcore 0 to userspace 00:06:41.662 POWER: Attempting to initialise PSTAT power management... 00:06:41.662 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:41.662 POWER: Initialized successfully for lcore 0 power management 00:06:41.662 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:41.662 POWER: Initialized successfully for lcore 1 power management 00:06:41.662 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:41.662 POWER: Initialized successfully for lcore 2 power management 00:06:41.662 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:41.662 POWER: Initialized successfully for lcore 3 power management 00:06:41.662 [2024-07-26 21:10:16.299841] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:41.662 [2024-07-26 21:10:16.299857] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:41.662 [2024-07-26 21:10:16.299869] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:41.662 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 [2024-07-26 21:10:16.367610] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
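With the framework running, the scheduler_create_thread subtest below populates the app with worker threads through a plugin-provided RPC, scheduler_thread_create, loaded via rpc.py --plugin scheduler_plugin. Each call names the thread, pins it with a cpumask, and sets an activity value; the active_pinned threads are created with -a 100 and the idle_pinned ones with -a 0, which by the names appears to be how busy the thread reports itself. Two representative calls, assuming the scheduler_plugin module is importable by rpc.py (the test harness takes care of that in the actual run):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0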
00:06:41.662 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:41.662 21:10:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.662 21:10:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 ************************************ 00:06:41.662 START TEST scheduler_create_thread 00:06:41.662 ************************************ 00:06:41.662 21:10:16 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 2 00:06:41.662 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 3 00:06:41.662 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 4 00:06:41.662 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 5 00:06:41.662 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 6 00:06:41.662 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 7 00:06:41.662 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 8 00:06:41.662 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:41.662 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.662 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.662 9 00:06:41.662 
21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.662 21:10:16 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:41.663 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.663 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.663 10 00:06:41.663 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.663 21:10:16 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:41.663 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.663 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.663 21:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.663 21:10:16 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:41.663 21:10:16 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:41.663 21:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.663 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:42.601 21:10:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.601 21:10:17 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:42.601 21:10:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.601 21:10:17 -- common/autotest_common.sh@10 -- # set +x 00:06:43.978 21:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.978 21:10:18 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:43.978 21:10:18 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:43.978 21:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.978 21:10:18 -- common/autotest_common.sh@10 -- # set +x 00:06:44.914 21:10:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.914 00:06:44.914 real 0m3.382s 00:06:44.914 user 0m0.023s 00:06:44.914 sys 0m0.006s 00:06:44.914 21:10:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.914 21:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:44.914 ************************************ 00:06:44.914 END TEST scheduler_create_thread 00:06:44.914 ************************************ 00:06:45.172 21:10:19 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:45.172 21:10:19 -- scheduler/scheduler.sh@46 -- # killprocess 1508966 00:06:45.172 21:10:19 -- common/autotest_common.sh@926 -- # '[' -z 1508966 ']' 00:06:45.172 21:10:19 -- common/autotest_common.sh@930 -- # kill -0 1508966 00:06:45.172 21:10:19 -- common/autotest_common.sh@931 -- # uname 00:06:45.172 21:10:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.173 21:10:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1508966 00:06:45.173 21:10:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:45.173 21:10:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:45.173 21:10:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1508966' 00:06:45.173 killing process with pid 1508966 00:06:45.173 21:10:19 -- common/autotest_common.sh@945 -- # kill 1508966 00:06:45.173 21:10:19 -- common/autotest_common.sh@950 -- # wait 1508966 00:06:45.431 [2024-07-26 21:10:20.139488] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
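Condensed, the scheduler_create_thread sequence above drives the test app's scheduler_plugin through the harness's rpc_cmd helper: pinned busy and idle threads on each core, one unpinned thread whose load is raised at runtime, and one thread created only to be deleted (thread ids 11 and 12 come from this particular run):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0) # unpinned, starts idle (id 11 here)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50              # raise it to 50% active
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)   # id 12 here
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"                     # and remove it again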
00:06:45.431 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:45.432 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:45.432 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:45.432 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:45.432 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:45.432 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:45.432 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:45.432 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:45.691 00:06:45.691 real 0m5.053s 00:06:45.691 user 0m10.412s 00:06:45.691 sys 0m0.418s 00:06:45.691 21:10:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.691 21:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:45.691 ************************************ 00:06:45.691 END TEST event_scheduler 00:06:45.691 ************************************ 00:06:45.691 21:10:20 -- event/event.sh@51 -- # modprobe -n nbd 00:06:45.691 21:10:20 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:45.691 21:10:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.691 21:10:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.691 21:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:45.691 ************************************ 00:06:45.691 START TEST app_repeat 00:06:45.691 ************************************ 00:06:45.691 21:10:20 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:45.691 21:10:20 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.691 21:10:20 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.691 21:10:20 -- event/event.sh@13 -- # local nbd_list 00:06:45.691 21:10:20 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.691 21:10:20 -- event/event.sh@14 -- # local bdev_list 00:06:45.691 21:10:20 -- event/event.sh@15 -- # local repeat_times=4 00:06:45.691 21:10:20 -- event/event.sh@17 -- # modprobe nbd 00:06:45.691 21:10:20 -- event/event.sh@19 -- # repeat_pid=1509833 00:06:45.691 21:10:20 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.691 21:10:20 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1509833' 00:06:45.691 Process app_repeat pid: 1509833 00:06:45.691 21:10:20 -- event/event.sh@23 -- # for i in {0..2} 00:06:45.691 21:10:20 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:45.691 21:10:20 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:45.691 spdk_app_start Round 0 00:06:45.691 21:10:20 -- event/event.sh@25 -- # waitforlisten 1509833 /var/tmp/spdk-nbd.sock 00:06:45.691 21:10:20 -- common/autotest_common.sh@819 -- # '[' -z 1509833 ']' 00:06:45.691 21:10:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.691 21:10:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.691 21:10:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:45.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.691 21:10:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.691 21:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:45.691 [2024-07-26 21:10:20.442468] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:45.691 [2024-07-26 21:10:20.442542] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509833 ] 00:06:45.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.691 [2024-07-26 21:10:20.531268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.950 [2024-07-26 21:10:20.569483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.950 [2024-07-26 21:10:20.569487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.517 21:10:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.517 21:10:21 -- common/autotest_common.sh@852 -- # return 0 00:06:46.517 21:10:21 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.776 Malloc0 00:06:46.776 21:10:21 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.776 Malloc1 00:06:46.776 21:10:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@12 -- # local i 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.776 21:10:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:47.035 /dev/nbd0 00:06:47.035 21:10:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.035 21:10:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.035 21:10:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:47.035 21:10:21 -- common/autotest_common.sh@857 -- # local i 00:06:47.035 21:10:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:47.035 21:10:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:47.035 21:10:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:47.035 21:10:21 -- common/autotest_common.sh@861 -- 
# break 00:06:47.035 21:10:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:47.035 21:10:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:47.035 21:10:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.035 1+0 records in 00:06:47.035 1+0 records out 00:06:47.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210758 s, 19.4 MB/s 00:06:47.035 21:10:21 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:47.035 21:10:21 -- common/autotest_common.sh@874 -- # size=4096 00:06:47.035 21:10:21 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:47.035 21:10:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:47.035 21:10:21 -- common/autotest_common.sh@877 -- # return 0 00:06:47.035 21:10:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.035 21:10:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.035 21:10:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.294 /dev/nbd1 00:06:47.294 21:10:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.294 21:10:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.294 21:10:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:47.294 21:10:21 -- common/autotest_common.sh@857 -- # local i 00:06:47.294 21:10:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:47.294 21:10:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:47.294 21:10:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:47.294 21:10:22 -- common/autotest_common.sh@861 -- # break 00:06:47.294 21:10:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:47.294 21:10:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:47.294 21:10:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.294 1+0 records in 00:06:47.294 1+0 records out 00:06:47.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177869 s, 23.0 MB/s 00:06:47.294 21:10:22 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:47.294 21:10:22 -- common/autotest_common.sh@874 -- # size=4096 00:06:47.294 21:10:22 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:47.294 21:10:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:47.294 21:10:22 -- common/autotest_common.sh@877 -- # return 0 00:06:47.294 21:10:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.294 21:10:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.294 21:10:22 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.294 21:10:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.294 21:10:22 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.611 { 00:06:47.611 "nbd_device": "/dev/nbd0", 00:06:47.611 "bdev_name": "Malloc0" 00:06:47.611 }, 00:06:47.611 { 00:06:47.611 "nbd_device": "/dev/nbd1", 00:06:47.611 "bdev_name": "Malloc1" 00:06:47.611 } 00:06:47.611 ]' 
00:06:47.611 21:10:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.611 { 00:06:47.611 "nbd_device": "/dev/nbd0", 00:06:47.611 "bdev_name": "Malloc0" 00:06:47.611 }, 00:06:47.611 { 00:06:47.611 "nbd_device": "/dev/nbd1", 00:06:47.611 "bdev_name": "Malloc1" 00:06:47.611 } 00:06:47.611 ]' 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.611 /dev/nbd1' 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.611 /dev/nbd1' 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.611 256+0 records in 00:06:47.611 256+0 records out 00:06:47.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113824 s, 92.1 MB/s 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.611 256+0 records in 00:06:47.611 256+0 records out 00:06:47.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194935 s, 53.8 MB/s 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.611 256+0 records in 00:06:47.611 256+0 records out 00:06:47.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184354 s, 56.9 MB/s 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
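The dd/cmp pairs above are the nbd data-verify pass: 1 MiB of random data is written through each exported /dev/nbdX with O_DIRECT and compared back against the source file. Roughly, with the temp-file path shortened and the block size and count as in the run:

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # 1 MiB of reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M nbdrandtest "$nbd"                          # non-zero exit means a data mismatch
  done
  rm nbdrandtest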
00:06:47.611 21:10:22 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.611 21:10:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.612 21:10:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.612 21:10:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.612 21:10:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.612 21:10:22 -- bdev/nbd_common.sh@51 -- # local i 00:06:47.612 21:10:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.612 21:10:22 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@41 -- # break 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@41 -- # break 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.880 21:10:22 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@65 -- # true 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.139 21:10:22 -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.139 21:10:22 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.398 21:10:23 -- event/event.sh@35 -- # sleep 3 00:06:48.656 [2024-07-26 21:10:23.272667] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:48.656 [2024-07-26 21:10:23.305740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.656 [2024-07-26 21:10:23.305743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.656 [2024-07-26 21:10:23.346956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.656 [2024-07-26 21:10:23.346998] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.946 21:10:26 -- event/event.sh@23 -- # for i in {0..2} 00:06:51.946 21:10:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:51.946 spdk_app_start Round 1 00:06:51.946 21:10:26 -- event/event.sh@25 -- # waitforlisten 1509833 /var/tmp/spdk-nbd.sock 00:06:51.946 21:10:26 -- common/autotest_common.sh@819 -- # '[' -z 1509833 ']' 00:06:51.946 21:10:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.946 21:10:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.946 21:10:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.946 21:10:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.946 21:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:51.946 21:10:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.946 21:10:26 -- common/autotest_common.sh@852 -- # return 0 00:06:51.946 21:10:26 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.946 Malloc0 00:06:51.946 21:10:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.946 Malloc1 00:06:51.946 21:10:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@12 -- # local i 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.946 21:10:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.946 /dev/nbd0 00:06:51.947 21:10:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.947 21:10:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.947 
21:10:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:51.947 21:10:26 -- common/autotest_common.sh@857 -- # local i 00:06:51.947 21:10:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:51.947 21:10:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:51.947 21:10:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:51.947 21:10:26 -- common/autotest_common.sh@861 -- # break 00:06:51.947 21:10:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:51.947 21:10:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:51.947 21:10:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.947 1+0 records in 00:06:51.947 1+0 records out 00:06:51.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210339 s, 19.5 MB/s 00:06:51.947 21:10:26 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:51.947 21:10:26 -- common/autotest_common.sh@874 -- # size=4096 00:06:51.947 21:10:26 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:51.947 21:10:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:51.947 21:10:26 -- common/autotest_common.sh@877 -- # return 0 00:06:51.947 21:10:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.947 21:10:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.947 21:10:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.206 /dev/nbd1 00:06:52.206 21:10:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.206 21:10:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.206 21:10:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:52.206 21:10:26 -- common/autotest_common.sh@857 -- # local i 00:06:52.206 21:10:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:52.206 21:10:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:52.206 21:10:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:52.206 21:10:26 -- common/autotest_common.sh@861 -- # break 00:06:52.206 21:10:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:52.206 21:10:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:52.206 21:10:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.206 1+0 records in 00:06:52.206 1+0 records out 00:06:52.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228288 s, 17.9 MB/s 00:06:52.206 21:10:27 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:52.206 21:10:27 -- common/autotest_common.sh@874 -- # size=4096 00:06:52.206 21:10:27 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:52.206 21:10:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:52.206 21:10:27 -- common/autotest_common.sh@877 -- # return 0 00:06:52.206 21:10:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.206 21:10:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.206 21:10:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.206 21:10:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.206 
21:10:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.465 21:10:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.465 { 00:06:52.465 "nbd_device": "/dev/nbd0", 00:06:52.465 "bdev_name": "Malloc0" 00:06:52.465 }, 00:06:52.465 { 00:06:52.465 "nbd_device": "/dev/nbd1", 00:06:52.465 "bdev_name": "Malloc1" 00:06:52.465 } 00:06:52.465 ]' 00:06:52.465 21:10:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.465 { 00:06:52.465 "nbd_device": "/dev/nbd0", 00:06:52.465 "bdev_name": "Malloc0" 00:06:52.465 }, 00:06:52.465 { 00:06:52.465 "nbd_device": "/dev/nbd1", 00:06:52.465 "bdev_name": "Malloc1" 00:06:52.465 } 00:06:52.465 ]' 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.466 /dev/nbd1' 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.466 /dev/nbd1' 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.466 256+0 records in 00:06:52.466 256+0 records out 00:06:52.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107779 s, 97.3 MB/s 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.466 256+0 records in 00:06:52.466 256+0 records out 00:06:52.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191408 s, 54.8 MB/s 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.466 256+0 records in 00:06:52.466 256+0 records out 00:06:52.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204542 s, 51.3 MB/s 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.466 
21:10:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@51 -- # local i 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.466 21:10:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@41 -- # break 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.725 21:10:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@41 -- # break 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.985 21:10:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@65 -- # true 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.244 
21:10:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.244 21:10:27 -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.244 21:10:27 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.244 21:10:28 -- event/event.sh@35 -- # sleep 3 00:06:53.503 [2024-07-26 21:10:28.274749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.503 [2024-07-26 21:10:28.306926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.503 [2024-07-26 21:10:28.306928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.503 [2024-07-26 21:10:28.348303] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.503 [2024-07-26 21:10:28.348344] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.796 21:10:31 -- event/event.sh@23 -- # for i in {0..2} 00:06:56.796 21:10:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:56.796 spdk_app_start Round 2 00:06:56.796 21:10:31 -- event/event.sh@25 -- # waitforlisten 1509833 /var/tmp/spdk-nbd.sock 00:06:56.796 21:10:31 -- common/autotest_common.sh@819 -- # '[' -z 1509833 ']' 00:06:56.796 21:10:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.796 21:10:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:56.796 21:10:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.796 21:10:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:56.796 21:10:31 -- common/autotest_common.sh@10 -- # set +x 00:06:56.796 21:10:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:56.796 21:10:31 -- common/autotest_common.sh@852 -- # return 0 00:06:56.796 21:10:31 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.796 Malloc0 00:06:56.796 21:10:31 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.796 Malloc1 00:06:56.796 21:10:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@12 -- # local i 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.796 21:10:31 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:57.056 /dev/nbd0 00:06:57.056 21:10:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.056 21:10:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.056 21:10:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:57.056 21:10:31 -- common/autotest_common.sh@857 -- # local i 00:06:57.056 21:10:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:57.056 21:10:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:57.056 21:10:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:57.056 21:10:31 -- common/autotest_common.sh@861 -- # break 00:06:57.056 21:10:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:57.056 21:10:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:57.056 21:10:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.056 1+0 records in 00:06:57.056 1+0 records out 00:06:57.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274276 s, 14.9 MB/s 00:06:57.056 21:10:31 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:57.056 21:10:31 -- common/autotest_common.sh@874 -- # size=4096 00:06:57.056 21:10:31 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:57.056 21:10:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:57.056 21:10:31 -- common/autotest_common.sh@877 -- # return 0 00:06:57.056 21:10:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.056 21:10:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.056 21:10:31 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.314 /dev/nbd1 00:06:57.314 21:10:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.314 21:10:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.314 21:10:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:57.314 21:10:32 -- common/autotest_common.sh@857 -- # local i 00:06:57.314 21:10:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:57.314 21:10:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:57.314 21:10:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:57.314 21:10:32 -- common/autotest_common.sh@861 -- # break 00:06:57.314 21:10:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:57.314 21:10:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:57.314 21:10:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.314 1+0 records in 00:06:57.314 1+0 records out 00:06:57.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154075 s, 26.6 MB/s 00:06:57.314 21:10:32 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:57.314 21:10:32 -- common/autotest_common.sh@874 -- # size=4096 00:06:57.314 21:10:32 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:57.314 21:10:32 -- 
common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:57.314 21:10:32 -- common/autotest_common.sh@877 -- # return 0 00:06:57.314 21:10:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.314 21:10:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.314 21:10:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.314 21:10:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.314 21:10:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.573 { 00:06:57.573 "nbd_device": "/dev/nbd0", 00:06:57.573 "bdev_name": "Malloc0" 00:06:57.573 }, 00:06:57.573 { 00:06:57.573 "nbd_device": "/dev/nbd1", 00:06:57.573 "bdev_name": "Malloc1" 00:06:57.573 } 00:06:57.573 ]' 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.573 { 00:06:57.573 "nbd_device": "/dev/nbd0", 00:06:57.573 "bdev_name": "Malloc0" 00:06:57.573 }, 00:06:57.573 { 00:06:57.573 "nbd_device": "/dev/nbd1", 00:06:57.573 "bdev_name": "Malloc1" 00:06:57.573 } 00:06:57.573 ]' 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.573 /dev/nbd1' 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.573 /dev/nbd1' 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@65 -- # count=2 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@95 -- # count=2 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:57.573 256+0 records in 00:06:57.573 256+0 records out 00:06:57.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114395 s, 91.7 MB/s 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.573 256+0 records in 00:06:57.573 256+0 records out 00:06:57.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198343 s, 52.9 MB/s 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.573 256+0 records in 00:06:57.573 256+0 records out 00:06:57.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176553 s, 59.4 MB/s 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.573 21:10:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@51 -- # local i 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.574 21:10:32 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@41 -- # break 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.833 21:10:32 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@41 -- # break 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@65 -- # true 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.092 21:10:32 -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.092 21:10:32 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.351 21:10:33 -- event/event.sh@35 -- # sleep 3 00:06:58.611 [2024-07-26 21:10:33.300149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.611 [2024-07-26 21:10:33.332641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.611 [2024-07-26 21:10:33.332669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.611 [2024-07-26 21:10:33.373954] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.611 [2024-07-26 21:10:33.373996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.903 21:10:36 -- event/event.sh@38 -- # waitforlisten 1509833 /var/tmp/spdk-nbd.sock 00:07:01.903 21:10:36 -- common/autotest_common.sh@819 -- # '[' -z 1509833 ']' 00:07:01.903 21:10:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.903 21:10:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:01.903 21:10:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.903 21:10:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:01.903 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.903 21:10:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:01.903 21:10:36 -- common/autotest_common.sh@852 -- # return 0 00:07:01.903 21:10:36 -- event/event.sh@39 -- # killprocess 1509833 00:07:01.903 21:10:36 -- common/autotest_common.sh@926 -- # '[' -z 1509833 ']' 00:07:01.903 21:10:36 -- common/autotest_common.sh@930 -- # kill -0 1509833 00:07:01.903 21:10:36 -- common/autotest_common.sh@931 -- # uname 00:07:01.903 21:10:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:01.903 21:10:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1509833 00:07:01.903 21:10:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:01.903 21:10:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:01.903 21:10:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1509833' 00:07:01.903 killing process with pid 1509833 00:07:01.903 21:10:36 -- common/autotest_common.sh@945 -- # kill 1509833 00:07:01.903 21:10:36 -- common/autotest_common.sh@950 -- # wait 1509833 00:07:01.903 spdk_app_start is called in Round 0. 00:07:01.903 Shutdown signal received, stop current app iteration 00:07:01.903 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:07:01.903 spdk_app_start is called in Round 1. 
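Each app_repeat round above has the same shape; roughly, with the per-round nbd verify collapsed into a comment and the rpc.py socket flag as seen in the run (the surrounding loop is a condensation, not a literal excerpt):

  for round in 0 1 2; do
      rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc0
      rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc1
      # export both bdevs as /dev/nbd0 and /dev/nbd1, run the dd/cmp verify, detach
      rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM    # end this iteration
      sleep 3                                                        # app_repeat restarts itself for the next round
  done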
00:07:01.903 Shutdown signal received, stop current app iteration 00:07:01.903 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:07:01.903 spdk_app_start is called in Round 2. 00:07:01.903 Shutdown signal received, stop current app iteration 00:07:01.903 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:07:01.903 spdk_app_start is called in Round 3. 00:07:01.903 Shutdown signal received, stop current app iteration 00:07:01.903 21:10:36 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:01.903 21:10:36 -- event/event.sh@42 -- # return 0 00:07:01.903 00:07:01.903 real 0m16.084s 00:07:01.903 user 0m34.175s 00:07:01.903 sys 0m2.989s 00:07:01.903 21:10:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.903 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.903 ************************************ 00:07:01.903 END TEST app_repeat 00:07:01.903 ************************************ 00:07:01.903 21:10:36 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:01.903 21:10:36 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:01.903 21:10:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:01.903 21:10:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.903 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.903 ************************************ 00:07:01.903 START TEST cpu_locks 00:07:01.903 ************************************ 00:07:01.903 21:10:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:01.903 * Looking for test storage... 00:07:01.903 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:01.903 21:10:36 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:01.903 21:10:36 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:01.903 21:10:36 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:01.903 21:10:36 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:01.903 21:10:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:01.903 21:10:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.903 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.903 ************************************ 00:07:01.903 START TEST default_locks 00:07:01.903 ************************************ 00:07:01.903 21:10:36 -- common/autotest_common.sh@1104 -- # default_locks 00:07:01.903 21:10:36 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1513024 00:07:01.903 21:10:36 -- event/cpu_locks.sh@47 -- # waitforlisten 1513024 00:07:01.903 21:10:36 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.903 21:10:36 -- common/autotest_common.sh@819 -- # '[' -z 1513024 ']' 00:07:01.903 21:10:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.903 21:10:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:01.903 21:10:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.903 21:10:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:01.903 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.903 [2024-07-26 21:10:36.690593] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:01.903 [2024-07-26 21:10:36.690658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513024 ] 00:07:01.903 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.162 [2024-07-26 21:10:36.775985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.162 [2024-07-26 21:10:36.813611] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:02.162 [2024-07-26 21:10:36.813730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.730 21:10:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:02.730 21:10:37 -- common/autotest_common.sh@852 -- # return 0 00:07:02.731 21:10:37 -- event/cpu_locks.sh@49 -- # locks_exist 1513024 00:07:02.731 21:10:37 -- event/cpu_locks.sh@22 -- # lslocks -p 1513024 00:07:02.731 21:10:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.299 lslocks: write error 00:07:03.299 21:10:38 -- event/cpu_locks.sh@50 -- # killprocess 1513024 00:07:03.299 21:10:38 -- common/autotest_common.sh@926 -- # '[' -z 1513024 ']' 00:07:03.299 21:10:38 -- common/autotest_common.sh@930 -- # kill -0 1513024 00:07:03.299 21:10:38 -- common/autotest_common.sh@931 -- # uname 00:07:03.299 21:10:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:03.299 21:10:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1513024 00:07:03.299 21:10:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:03.299 21:10:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:03.299 21:10:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1513024' 00:07:03.299 killing process with pid 1513024 00:07:03.299 21:10:38 -- common/autotest_common.sh@945 -- # kill 1513024 00:07:03.299 21:10:38 -- common/autotest_common.sh@950 -- # wait 1513024 00:07:03.558 21:10:38 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1513024 00:07:03.558 21:10:38 -- common/autotest_common.sh@640 -- # local es=0 00:07:03.558 21:10:38 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1513024 00:07:03.558 21:10:38 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:03.558 21:10:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:03.558 21:10:38 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:03.558 21:10:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:03.558 21:10:38 -- common/autotest_common.sh@643 -- # waitforlisten 1513024 00:07:03.558 21:10:38 -- common/autotest_common.sh@819 -- # '[' -z 1513024 ']' 00:07:03.558 21:10:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.558 21:10:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:03.558 21:10:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
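The default_locks run above follows a simple pattern: launch one spdk_tgt pinned to core mask 0x1, confirm the process holds an spdk_cpu_lock file via lslocks, then kill it and verify a second lookup reports the process as gone. A minimal hand-run sketch of that check, using the workspace path and flags from this run purely as illustrative values (the sleep is a crude stand-in for the harness's waitforlisten helper):
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk          # assumed checkout location, as in this job
$SPDK/build/bin/spdk_tgt -m 0x1 &                           # single-core target, takes the core 0 lock
pid=$!
sleep 1                                                     # crude stand-in for waitforlisten
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
kill "$pid"; wait "$pid"                                    # after this, the core lock is released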
00:07:03.558 21:10:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:03.558 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:07:03.558 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1513024) - No such process 00:07:03.558 ERROR: process (pid: 1513024) is no longer running 00:07:03.558 21:10:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:03.558 21:10:38 -- common/autotest_common.sh@852 -- # return 1 00:07:03.558 21:10:38 -- common/autotest_common.sh@643 -- # es=1 00:07:03.558 21:10:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:03.558 21:10:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:03.558 21:10:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:03.558 21:10:38 -- event/cpu_locks.sh@54 -- # no_locks 00:07:03.558 21:10:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:03.558 21:10:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:03.558 21:10:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:03.558 00:07:03.558 real 0m1.765s 00:07:03.558 user 0m1.816s 00:07:03.558 sys 0m0.638s 00:07:03.558 21:10:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.558 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:07:03.558 ************************************ 00:07:03.558 END TEST default_locks 00:07:03.558 ************************************ 00:07:03.818 21:10:38 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:03.818 21:10:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:03.818 21:10:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.818 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:07:03.818 ************************************ 00:07:03.818 START TEST default_locks_via_rpc 00:07:03.818 ************************************ 00:07:03.818 21:10:38 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:07:03.818 21:10:38 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1513320 00:07:03.818 21:10:38 -- event/cpu_locks.sh@63 -- # waitforlisten 1513320 00:07:03.818 21:10:38 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.818 21:10:38 -- common/autotest_common.sh@819 -- # '[' -z 1513320 ']' 00:07:03.818 21:10:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.818 21:10:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:03.818 21:10:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.818 21:10:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:03.818 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:07:03.818 [2024-07-26 21:10:38.499708] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:03.818 [2024-07-26 21:10:38.499761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513320 ] 00:07:03.818 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.818 [2024-07-26 21:10:38.583515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.818 [2024-07-26 21:10:38.620679] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:03.818 [2024-07-26 21:10:38.620796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.755 21:10:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:04.755 21:10:39 -- common/autotest_common.sh@852 -- # return 0 00:07:04.755 21:10:39 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:04.755 21:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.755 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.755 21:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.755 21:10:39 -- event/cpu_locks.sh@67 -- # no_locks 00:07:04.755 21:10:39 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:04.755 21:10:39 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:04.755 21:10:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:04.755 21:10:39 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.755 21:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.755 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:07:04.755 21:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.755 21:10:39 -- event/cpu_locks.sh@71 -- # locks_exist 1513320 00:07:04.755 21:10:39 -- event/cpu_locks.sh@22 -- # lslocks -p 1513320 00:07:04.755 21:10:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.323 21:10:39 -- event/cpu_locks.sh@73 -- # killprocess 1513320 00:07:05.323 21:10:39 -- common/autotest_common.sh@926 -- # '[' -z 1513320 ']' 00:07:05.323 21:10:39 -- common/autotest_common.sh@930 -- # kill -0 1513320 00:07:05.323 21:10:39 -- common/autotest_common.sh@931 -- # uname 00:07:05.323 21:10:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:05.323 21:10:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1513320 00:07:05.323 21:10:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:05.323 21:10:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:05.323 21:10:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1513320' 00:07:05.323 killing process with pid 1513320 00:07:05.323 21:10:39 -- common/autotest_common.sh@945 -- # kill 1513320 00:07:05.323 21:10:39 -- common/autotest_common.sh@950 -- # wait 1513320 00:07:05.581 00:07:05.581 real 0m1.814s 00:07:05.581 user 0m1.866s 00:07:05.581 sys 0m0.648s 00:07:05.581 21:10:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.581 21:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:05.581 ************************************ 00:07:05.581 END TEST default_locks_via_rpc 00:07:05.581 ************************************ 00:07:05.581 21:10:40 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:05.581 21:10:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:05.581 21:10:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.581 21:10:40 -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.581 ************************************ 00:07:05.581 START TEST non_locking_app_on_locked_coremask 00:07:05.581 ************************************ 00:07:05.581 21:10:40 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:07:05.581 21:10:40 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1513627 00:07:05.581 21:10:40 -- event/cpu_locks.sh@81 -- # waitforlisten 1513627 /var/tmp/spdk.sock 00:07:05.581 21:10:40 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.581 21:10:40 -- common/autotest_common.sh@819 -- # '[' -z 1513627 ']' 00:07:05.581 21:10:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.581 21:10:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:05.581 21:10:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.581 21:10:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:05.581 21:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:05.581 [2024-07-26 21:10:40.358958] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:05.581 [2024-07-26 21:10:40.359011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513627 ] 00:07:05.581 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.581 [2024-07-26 21:10:40.443908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.840 [2024-07-26 21:10:40.480236] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.840 [2024-07-26 21:10:40.480357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.408 21:10:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:06.408 21:10:41 -- common/autotest_common.sh@852 -- # return 0 00:07:06.408 21:10:41 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1513891 00:07:06.408 21:10:41 -- event/cpu_locks.sh@85 -- # waitforlisten 1513891 /var/tmp/spdk2.sock 00:07:06.408 21:10:41 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:06.408 21:10:41 -- common/autotest_common.sh@819 -- # '[' -z 1513891 ']' 00:07:06.408 21:10:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.408 21:10:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:06.408 21:10:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.408 21:10:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:06.408 21:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:06.408 [2024-07-26 21:10:41.193201] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:06.408 [2024-07-26 21:10:41.193255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513891 ] 00:07:06.408 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.667 [2024-07-26 21:10:41.311123] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:06.667 [2024-07-26 21:10:41.311154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.667 [2024-07-26 21:10:41.383608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:06.667 [2024-07-26 21:10:41.383753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.263 21:10:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:07.263 21:10:41 -- common/autotest_common.sh@852 -- # return 0 00:07:07.263 21:10:41 -- event/cpu_locks.sh@87 -- # locks_exist 1513627 00:07:07.263 21:10:41 -- event/cpu_locks.sh@22 -- # lslocks -p 1513627 00:07:07.263 21:10:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.838 lslocks: write error 00:07:07.838 21:10:42 -- event/cpu_locks.sh@89 -- # killprocess 1513627 00:07:07.838 21:10:42 -- common/autotest_common.sh@926 -- # '[' -z 1513627 ']' 00:07:07.838 21:10:42 -- common/autotest_common.sh@930 -- # kill -0 1513627 00:07:07.838 21:10:42 -- common/autotest_common.sh@931 -- # uname 00:07:07.838 21:10:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:07.838 21:10:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1513627 00:07:08.096 21:10:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:08.097 21:10:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:08.097 21:10:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1513627' 00:07:08.097 killing process with pid 1513627 00:07:08.097 21:10:42 -- common/autotest_common.sh@945 -- # kill 1513627 00:07:08.097 21:10:42 -- common/autotest_common.sh@950 -- # wait 1513627 00:07:08.664 21:10:43 -- event/cpu_locks.sh@90 -- # killprocess 1513891 00:07:08.664 21:10:43 -- common/autotest_common.sh@926 -- # '[' -z 1513891 ']' 00:07:08.664 21:10:43 -- common/autotest_common.sh@930 -- # kill -0 1513891 00:07:08.664 21:10:43 -- common/autotest_common.sh@931 -- # uname 00:07:08.664 21:10:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:08.664 21:10:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1513891 00:07:08.664 21:10:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:08.664 21:10:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:08.664 21:10:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1513891' 00:07:08.664 killing process with pid 1513891 00:07:08.664 21:10:43 -- common/autotest_common.sh@945 -- # kill 1513891 00:07:08.664 21:10:43 -- common/autotest_common.sh@950 -- # wait 1513891 00:07:08.922 00:07:08.922 real 0m3.379s 00:07:08.922 user 0m3.567s 00:07:08.922 sys 0m1.140s 00:07:08.922 21:10:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.922 21:10:43 -- common/autotest_common.sh@10 -- # set +x 00:07:08.922 ************************************ 00:07:08.922 END TEST non_locking_app_on_locked_coremask 00:07:08.922 ************************************ 00:07:08.922 21:10:43 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:07:08.922 21:10:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:08.922 21:10:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.922 21:10:43 -- common/autotest_common.sh@10 -- # set +x 00:07:08.922 ************************************ 00:07:08.922 START TEST locking_app_on_unlocked_coremask 00:07:08.922 ************************************ 00:07:08.922 21:10:43 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:07:08.922 21:10:43 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1514220 00:07:08.922 21:10:43 -- event/cpu_locks.sh@99 -- # waitforlisten 1514220 /var/tmp/spdk.sock 00:07:08.922 21:10:43 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:08.922 21:10:43 -- common/autotest_common.sh@819 -- # '[' -z 1514220 ']' 00:07:08.922 21:10:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.922 21:10:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:08.922 21:10:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.922 21:10:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:08.922 21:10:43 -- common/autotest_common.sh@10 -- # set +x 00:07:08.922 [2024-07-26 21:10:43.780486] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:08.922 [2024-07-26 21:10:43.780542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514220 ] 00:07:09.180 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.180 [2024-07-26 21:10:43.865743] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:09.180 [2024-07-26 21:10:43.865772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.180 [2024-07-26 21:10:43.903374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.180 [2024-07-26 21:10:43.903499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.747 21:10:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:09.747 21:10:44 -- common/autotest_common.sh@852 -- # return 0 00:07:09.747 21:10:44 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1514477 00:07:09.747 21:10:44 -- event/cpu_locks.sh@103 -- # waitforlisten 1514477 /var/tmp/spdk2.sock 00:07:09.747 21:10:44 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:09.747 21:10:44 -- common/autotest_common.sh@819 -- # '[' -z 1514477 ']' 00:07:09.747 21:10:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.747 21:10:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:09.747 21:10:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
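locking_app_on_unlocked_coremask inverts the earlier case: the first target opts out of core locking with --disable-cpumask-locks, so a second target started on the same mask (with its own RPC socket) is the one that ends up holding the spdk_cpu_lock file, which the lslocks check further down runs against pid2. A rough outline of the two launches, with the mask and socket path copied from this run as illustrative values only:
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk           # assumed checkout location
$SPDK/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # first instance: skips the core lock
pid1=$!
$SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # second instance, same core, separate socket
pid2=$!
lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "core lock held by instance 2"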
00:07:09.747 21:10:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:09.747 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:10.006 [2024-07-26 21:10:44.625356] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:10.006 [2024-07-26 21:10:44.625410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514477 ] 00:07:10.006 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.006 [2024-07-26 21:10:44.743471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.006 [2024-07-26 21:10:44.815500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.006 [2024-07-26 21:10:44.815608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.574 21:10:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:10.574 21:10:45 -- common/autotest_common.sh@852 -- # return 0 00:07:10.574 21:10:45 -- event/cpu_locks.sh@105 -- # locks_exist 1514477 00:07:10.574 21:10:45 -- event/cpu_locks.sh@22 -- # lslocks -p 1514477 00:07:10.574 21:10:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.512 lslocks: write error 00:07:11.512 21:10:46 -- event/cpu_locks.sh@107 -- # killprocess 1514220 00:07:11.512 21:10:46 -- common/autotest_common.sh@926 -- # '[' -z 1514220 ']' 00:07:11.512 21:10:46 -- common/autotest_common.sh@930 -- # kill -0 1514220 00:07:11.512 21:10:46 -- common/autotest_common.sh@931 -- # uname 00:07:11.512 21:10:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:11.512 21:10:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1514220 00:07:11.512 21:10:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:11.512 21:10:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:11.512 21:10:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1514220' 00:07:11.512 killing process with pid 1514220 00:07:11.512 21:10:46 -- common/autotest_common.sh@945 -- # kill 1514220 00:07:11.512 21:10:46 -- common/autotest_common.sh@950 -- # wait 1514220 00:07:12.449 21:10:46 -- event/cpu_locks.sh@108 -- # killprocess 1514477 00:07:12.450 21:10:46 -- common/autotest_common.sh@926 -- # '[' -z 1514477 ']' 00:07:12.450 21:10:46 -- common/autotest_common.sh@930 -- # kill -0 1514477 00:07:12.450 21:10:46 -- common/autotest_common.sh@931 -- # uname 00:07:12.450 21:10:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:12.450 21:10:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1514477 00:07:12.450 21:10:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:12.450 21:10:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:12.450 21:10:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1514477' 00:07:12.450 killing process with pid 1514477 00:07:12.450 21:10:47 -- common/autotest_common.sh@945 -- # kill 1514477 00:07:12.450 21:10:47 -- common/autotest_common.sh@950 -- # wait 1514477 00:07:12.450 00:07:12.450 real 0m3.583s 00:07:12.450 user 0m3.794s 00:07:12.450 sys 0m1.217s 00:07:12.450 21:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.450 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:07:12.450 ************************************ 00:07:12.450 END TEST locking_app_on_unlocked_coremask 
00:07:12.450 ************************************ 00:07:12.709 21:10:47 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:12.709 21:10:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.709 21:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.709 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:07:12.709 ************************************ 00:07:12.709 START TEST locking_app_on_locked_coremask 00:07:12.709 ************************************ 00:07:12.709 21:10:47 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:07:12.709 21:10:47 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1515060 00:07:12.709 21:10:47 -- event/cpu_locks.sh@116 -- # waitforlisten 1515060 /var/tmp/spdk.sock 00:07:12.709 21:10:47 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.709 21:10:47 -- common/autotest_common.sh@819 -- # '[' -z 1515060 ']' 00:07:12.709 21:10:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.709 21:10:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:12.709 21:10:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.709 21:10:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:12.709 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:07:12.709 [2024-07-26 21:10:47.413114] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:12.709 [2024-07-26 21:10:47.413170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515060 ] 00:07:12.709 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.709 [2024-07-26 21:10:47.495536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.709 [2024-07-26 21:10:47.529500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.709 [2024-07-26 21:10:47.529621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.646 21:10:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:13.646 21:10:48 -- common/autotest_common.sh@852 -- # return 0 00:07:13.646 21:10:48 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1515080 00:07:13.646 21:10:48 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1515080 /var/tmp/spdk2.sock 00:07:13.646 21:10:48 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:13.646 21:10:48 -- common/autotest_common.sh@640 -- # local es=0 00:07:13.646 21:10:48 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1515080 /var/tmp/spdk2.sock 00:07:13.646 21:10:48 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:13.646 21:10:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:13.646 21:10:48 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:13.646 21:10:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:13.646 21:10:48 -- common/autotest_common.sh@643 -- # waitforlisten 1515080 /var/tmp/spdk2.sock 00:07:13.646 21:10:48 -- common/autotest_common.sh@819 -- # '[' 
-z 1515080 ']' 00:07:13.646 21:10:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.646 21:10:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:13.646 21:10:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.646 21:10:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:13.646 21:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:13.646 [2024-07-26 21:10:48.242207] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:13.646 [2024-07-26 21:10:48.242261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515080 ] 00:07:13.646 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.646 [2024-07-26 21:10:48.360046] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1515060 has claimed it. 00:07:13.646 [2024-07-26 21:10:48.360088] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:14.214 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1515080) - No such process 00:07:14.214 ERROR: process (pid: 1515080) is no longer running 00:07:14.214 21:10:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:14.214 21:10:48 -- common/autotest_common.sh@852 -- # return 1 00:07:14.214 21:10:48 -- common/autotest_common.sh@643 -- # es=1 00:07:14.214 21:10:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:14.214 21:10:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:14.214 21:10:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:14.214 21:10:48 -- event/cpu_locks.sh@122 -- # locks_exist 1515060 00:07:14.214 21:10:48 -- event/cpu_locks.sh@22 -- # lslocks -p 1515060 00:07:14.214 21:10:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.782 lslocks: write error 00:07:14.782 21:10:49 -- event/cpu_locks.sh@124 -- # killprocess 1515060 00:07:14.782 21:10:49 -- common/autotest_common.sh@926 -- # '[' -z 1515060 ']' 00:07:14.782 21:10:49 -- common/autotest_common.sh@930 -- # kill -0 1515060 00:07:14.782 21:10:49 -- common/autotest_common.sh@931 -- # uname 00:07:14.782 21:10:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:14.782 21:10:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1515060 00:07:14.782 21:10:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:14.783 21:10:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:14.783 21:10:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1515060' 00:07:14.783 killing process with pid 1515060 00:07:14.783 21:10:49 -- common/autotest_common.sh@945 -- # kill 1515060 00:07:14.783 21:10:49 -- common/autotest_common.sh@950 -- # wait 1515060 00:07:15.351 00:07:15.351 real 0m2.566s 00:07:15.351 user 0m2.808s 00:07:15.351 sys 0m0.821s 00:07:15.351 21:10:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.351 21:10:49 -- common/autotest_common.sh@10 -- # set +x 00:07:15.351 ************************************ 00:07:15.351 END TEST locking_app_on_locked_coremask 00:07:15.351 ************************************ 00:07:15.351 21:10:49 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:15.351 21:10:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.351 21:10:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.351 21:10:49 -- common/autotest_common.sh@10 -- # set +x 00:07:15.351 ************************************ 00:07:15.351 START TEST locking_overlapped_coremask 00:07:15.351 ************************************ 00:07:15.351 21:10:49 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:07:15.351 21:10:49 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1515412 00:07:15.351 21:10:49 -- event/cpu_locks.sh@133 -- # waitforlisten 1515412 /var/tmp/spdk.sock 00:07:15.351 21:10:49 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:15.351 21:10:49 -- common/autotest_common.sh@819 -- # '[' -z 1515412 ']' 00:07:15.351 21:10:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.351 21:10:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:15.351 21:10:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.351 21:10:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:15.351 21:10:49 -- common/autotest_common.sh@10 -- # set +x 00:07:15.351 [2024-07-26 21:10:50.029504] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:15.351 [2024-07-26 21:10:50.029567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515412 ] 00:07:15.351 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.351 [2024-07-26 21:10:50.117678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.351 [2024-07-26 21:10:50.154832] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:15.351 [2024-07-26 21:10:50.155059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.351 [2024-07-26 21:10:50.155138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.351 [2024-07-26 21:10:50.155140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.287 21:10:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.287 21:10:50 -- common/autotest_common.sh@852 -- # return 0 00:07:16.287 21:10:50 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:16.287 21:10:50 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1515644 00:07:16.287 21:10:50 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1515644 /var/tmp/spdk2.sock 00:07:16.287 21:10:50 -- common/autotest_common.sh@640 -- # local es=0 00:07:16.287 21:10:50 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1515644 /var/tmp/spdk2.sock 00:07:16.287 21:10:50 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:16.287 21:10:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:16.287 21:10:50 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:16.287 21:10:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:16.287 21:10:50 -- 
common/autotest_common.sh@643 -- # waitforlisten 1515644 /var/tmp/spdk2.sock 00:07:16.287 21:10:50 -- common/autotest_common.sh@819 -- # '[' -z 1515644 ']' 00:07:16.287 21:10:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.287 21:10:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:16.287 21:10:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.287 21:10:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:16.287 21:10:50 -- common/autotest_common.sh@10 -- # set +x 00:07:16.287 [2024-07-26 21:10:50.852194] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:16.287 [2024-07-26 21:10:50.852249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515644 ] 00:07:16.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.287 [2024-07-26 21:10:50.971156] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1515412 has claimed it. 00:07:16.287 [2024-07-26 21:10:50.971209] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:16.855 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1515644) - No such process 00:07:16.855 ERROR: process (pid: 1515644) is no longer running 00:07:16.855 21:10:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.855 21:10:51 -- common/autotest_common.sh@852 -- # return 1 00:07:16.855 21:10:51 -- common/autotest_common.sh@643 -- # es=1 00:07:16.855 21:10:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:16.855 21:10:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:16.855 21:10:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:16.855 21:10:51 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:16.855 21:10:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.855 21:10:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.855 21:10:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.855 21:10:51 -- event/cpu_locks.sh@141 -- # killprocess 1515412 00:07:16.855 21:10:51 -- common/autotest_common.sh@926 -- # '[' -z 1515412 ']' 00:07:16.855 21:10:51 -- common/autotest_common.sh@930 -- # kill -0 1515412 00:07:16.855 21:10:51 -- common/autotest_common.sh@931 -- # uname 00:07:16.855 21:10:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:16.855 21:10:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1515412 00:07:16.855 21:10:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:16.855 21:10:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:16.855 21:10:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1515412' 00:07:16.855 killing process with pid 1515412 00:07:16.855 21:10:51 -- common/autotest_common.sh@945 -- # kill 1515412 00:07:16.855 21:10:51 -- 
common/autotest_common.sh@950 -- # wait 1515412 00:07:17.114 00:07:17.114 real 0m1.863s 00:07:17.114 user 0m5.204s 00:07:17.114 sys 0m0.485s 00:07:17.114 21:10:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.114 21:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.114 ************************************ 00:07:17.114 END TEST locking_overlapped_coremask 00:07:17.114 ************************************ 00:07:17.114 21:10:51 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:17.114 21:10:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.114 21:10:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.114 21:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.114 ************************************ 00:07:17.114 START TEST locking_overlapped_coremask_via_rpc 00:07:17.114 ************************************ 00:07:17.114 21:10:51 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:07:17.114 21:10:51 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1515911 00:07:17.114 21:10:51 -- event/cpu_locks.sh@149 -- # waitforlisten 1515911 /var/tmp/spdk.sock 00:07:17.114 21:10:51 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:17.114 21:10:51 -- common/autotest_common.sh@819 -- # '[' -z 1515911 ']' 00:07:17.114 21:10:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.114 21:10:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.114 21:10:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.114 21:10:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.114 21:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.114 [2024-07-26 21:10:51.941138] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:17.114 [2024-07-26 21:10:51.941199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515911 ] 00:07:17.114 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.373 [2024-07-26 21:10:52.024038] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:17.373 [2024-07-26 21:10:52.024066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.373 [2024-07-26 21:10:52.060306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.373 [2024-07-26 21:10:52.060539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.373 [2024-07-26 21:10:52.060641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.373 [2024-07-26 21:10:52.060644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.941 21:10:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:17.941 21:10:52 -- common/autotest_common.sh@852 -- # return 0 00:07:17.941 21:10:52 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1515955 00:07:17.941 21:10:52 -- event/cpu_locks.sh@153 -- # waitforlisten 1515955 /var/tmp/spdk2.sock 00:07:17.941 21:10:52 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:17.941 21:10:52 -- common/autotest_common.sh@819 -- # '[' -z 1515955 ']' 00:07:17.941 21:10:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.941 21:10:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.941 21:10:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.941 21:10:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.941 21:10:52 -- common/autotest_common.sh@10 -- # set +x 00:07:17.941 [2024-07-26 21:10:52.776812] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:17.941 [2024-07-26 21:10:52.776866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515955 ] 00:07:18.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.200 [2024-07-26 21:10:52.899112] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.200 [2024-07-26 21:10:52.899141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.200 [2024-07-26 21:10:52.973334] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.200 [2024-07-26 21:10:52.973509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.200 [2024-07-26 21:10:52.973604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.200 [2024-07-26 21:10:52.973606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.767 21:10:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.767 21:10:53 -- common/autotest_common.sh@852 -- # return 0 00:07:18.767 21:10:53 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:18.767 21:10:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.767 21:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:18.767 21:10:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.767 21:10:53 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.767 21:10:53 -- common/autotest_common.sh@640 -- # local es=0 00:07:18.767 21:10:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.767 21:10:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:07:18.767 21:10:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:18.767 21:10:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:07:18.767 21:10:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:18.767 21:10:53 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.767 21:10:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.767 21:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:18.767 [2024-07-26 21:10:53.580690] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1515911 has claimed it. 00:07:18.767 request: 00:07:18.767 { 00:07:18.767 "method": "framework_enable_cpumask_locks", 00:07:18.767 "req_id": 1 00:07:18.767 } 00:07:18.767 Got JSON-RPC error response 00:07:18.767 response: 00:07:18.767 { 00:07:18.767 "code": -32603, 00:07:18.767 "message": "Failed to claim CPU core: 2" 00:07:18.767 } 00:07:18.767 21:10:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:07:18.767 21:10:53 -- common/autotest_common.sh@643 -- # es=1 00:07:18.767 21:10:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:18.767 21:10:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:18.767 21:10:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:18.767 21:10:53 -- event/cpu_locks.sh@158 -- # waitforlisten 1515911 /var/tmp/spdk.sock 00:07:18.767 21:10:53 -- common/autotest_common.sh@819 -- # '[' -z 1515911 ']' 00:07:18.767 21:10:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.767 21:10:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.767 21:10:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
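The -32603 response above is the expected outcome for locking_overlapped_coremask_via_rpc: both targets start with --disable-cpumask-locks, the first (mask 0x7, pid 1515911) then claims its cores via framework_enable_cpumask_locks, so the same RPC against the second instance (mask 0x1c) collides on the shared core 2. Issued by hand, the exchange would look roughly like this; the rpc.py path and socket names mirror this run and should be treated as placeholders:
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first instance claims cores 0-2
$RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed
# -> JSON-RPC error -32603, "Failed to claim CPU core: 2"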
00:07:18.767 21:10:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.767 21:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.026 21:10:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:19.026 21:10:53 -- common/autotest_common.sh@852 -- # return 0 00:07:19.026 21:10:53 -- event/cpu_locks.sh@159 -- # waitforlisten 1515955 /var/tmp/spdk2.sock 00:07:19.026 21:10:53 -- common/autotest_common.sh@819 -- # '[' -z 1515955 ']' 00:07:19.026 21:10:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.026 21:10:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:19.026 21:10:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.026 21:10:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:19.026 21:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.285 21:10:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:19.285 21:10:53 -- common/autotest_common.sh@852 -- # return 0 00:07:19.285 21:10:53 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:19.285 21:10:53 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.285 21:10:53 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.285 21:10:53 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.285 00:07:19.285 real 0m2.065s 00:07:19.285 user 0m0.767s 00:07:19.285 sys 0m0.227s 00:07:19.285 21:10:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.285 21:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.285 ************************************ 00:07:19.285 END TEST locking_overlapped_coremask_via_rpc 00:07:19.285 ************************************ 00:07:19.285 21:10:53 -- event/cpu_locks.sh@174 -- # cleanup 00:07:19.285 21:10:53 -- event/cpu_locks.sh@15 -- # [[ -z 1515911 ]] 00:07:19.285 21:10:53 -- event/cpu_locks.sh@15 -- # killprocess 1515911 00:07:19.285 21:10:53 -- common/autotest_common.sh@926 -- # '[' -z 1515911 ']' 00:07:19.285 21:10:53 -- common/autotest_common.sh@930 -- # kill -0 1515911 00:07:19.285 21:10:54 -- common/autotest_common.sh@931 -- # uname 00:07:19.285 21:10:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:19.285 21:10:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1515911 00:07:19.285 21:10:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:19.285 21:10:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:19.285 21:10:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1515911' 00:07:19.285 killing process with pid 1515911 00:07:19.285 21:10:54 -- common/autotest_common.sh@945 -- # kill 1515911 00:07:19.285 21:10:54 -- common/autotest_common.sh@950 -- # wait 1515911 00:07:19.543 21:10:54 -- event/cpu_locks.sh@16 -- # [[ -z 1515955 ]] 00:07:19.543 21:10:54 -- event/cpu_locks.sh@16 -- # killprocess 1515955 00:07:19.543 21:10:54 -- common/autotest_common.sh@926 -- # '[' -z 1515955 ']' 00:07:19.543 21:10:54 -- common/autotest_common.sh@930 -- # kill -0 1515955 00:07:19.543 21:10:54 -- common/autotest_common.sh@931 -- # uname 
00:07:19.543 21:10:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:19.543 21:10:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1515955 00:07:19.801 21:10:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:19.801 21:10:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:19.801 21:10:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1515955' 00:07:19.801 killing process with pid 1515955 00:07:19.801 21:10:54 -- common/autotest_common.sh@945 -- # kill 1515955 00:07:19.801 21:10:54 -- common/autotest_common.sh@950 -- # wait 1515955 00:07:20.060 21:10:54 -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.060 21:10:54 -- event/cpu_locks.sh@1 -- # cleanup 00:07:20.060 21:10:54 -- event/cpu_locks.sh@15 -- # [[ -z 1515911 ]] 00:07:20.060 21:10:54 -- event/cpu_locks.sh@15 -- # killprocess 1515911 00:07:20.060 21:10:54 -- common/autotest_common.sh@926 -- # '[' -z 1515911 ']' 00:07:20.060 21:10:54 -- common/autotest_common.sh@930 -- # kill -0 1515911 00:07:20.060 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1515911) - No such process 00:07:20.060 21:10:54 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1515911 is not found' 00:07:20.060 Process with pid 1515911 is not found 00:07:20.060 21:10:54 -- event/cpu_locks.sh@16 -- # [[ -z 1515955 ]] 00:07:20.060 21:10:54 -- event/cpu_locks.sh@16 -- # killprocess 1515955 00:07:20.060 21:10:54 -- common/autotest_common.sh@926 -- # '[' -z 1515955 ']' 00:07:20.060 21:10:54 -- common/autotest_common.sh@930 -- # kill -0 1515955 00:07:20.060 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1515955) - No such process 00:07:20.060 21:10:54 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1515955 is not found' 00:07:20.060 Process with pid 1515955 is not found 00:07:20.060 21:10:54 -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.060 00:07:20.060 real 0m18.180s 00:07:20.060 user 0m30.233s 00:07:20.060 sys 0m6.104s 00:07:20.060 21:10:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.061 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.061 ************************************ 00:07:20.061 END TEST cpu_locks 00:07:20.061 ************************************ 00:07:20.061 00:07:20.061 real 0m43.291s 00:07:20.061 user 1m21.217s 00:07:20.061 sys 0m10.138s 00:07:20.061 21:10:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.061 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.061 ************************************ 00:07:20.061 END TEST event 00:07:20.061 ************************************ 00:07:20.061 21:10:54 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:20.061 21:10:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.061 21:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.061 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.061 ************************************ 00:07:20.061 START TEST thread 00:07:20.061 ************************************ 00:07:20.061 21:10:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:20.061 * Looking for test storage... 
00:07:20.061 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:20.061 21:10:54 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.061 21:10:54 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:20.061 21:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.061 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.061 ************************************ 00:07:20.061 START TEST thread_poller_perf 00:07:20.061 ************************************ 00:07:20.061 21:10:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.320 [2024-07-26 21:10:54.937902] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:20.320 [2024-07-26 21:10:54.937993] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516526 ] 00:07:20.320 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.320 [2024-07-26 21:10:55.027438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.320 [2024-07-26 21:10:55.063803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.320 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:21.256 ====================================== 00:07:21.256 busy:2510023360 (cyc) 00:07:21.256 total_run_count: 417000 00:07:21.256 tsc_hz: 2500000000 (cyc) 00:07:21.256 ====================================== 00:07:21.256 poller_cost: 6019 (cyc), 2407 (nsec) 00:07:21.515 00:07:21.515 real 0m1.212s 00:07:21.515 user 0m1.103s 00:07:21.515 sys 0m0.106s 00:07:21.515 21:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.515 21:10:56 -- common/autotest_common.sh@10 -- # set +x 00:07:21.515 ************************************ 00:07:21.515 END TEST thread_poller_perf 00:07:21.515 ************************************ 00:07:21.515 21:10:56 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.515 21:10:56 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:21.515 21:10:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.515 21:10:56 -- common/autotest_common.sh@10 -- # set +x 00:07:21.515 ************************************ 00:07:21.515 START TEST thread_poller_perf 00:07:21.515 ************************************ 00:07:21.515 21:10:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.515 [2024-07-26 21:10:56.201124] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:21.515 [2024-07-26 21:10:56.201216] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516703 ] 00:07:21.515 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.515 [2024-07-26 21:10:56.289436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.515 [2024-07-26 21:10:56.325500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.515 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:22.894 ====================================== 00:07:22.894 busy:2502397718 (cyc) 00:07:22.894 total_run_count: 5696000 00:07:22.895 tsc_hz: 2500000000 (cyc) 00:07:22.895 ====================================== 00:07:22.895 poller_cost: 439 (cyc), 175 (nsec) 00:07:22.895 00:07:22.895 real 0m1.202s 00:07:22.895 user 0m1.096s 00:07:22.895 sys 0m0.102s 00:07:22.895 21:10:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.895 21:10:57 -- common/autotest_common.sh@10 -- # set +x 00:07:22.895 ************************************ 00:07:22.895 END TEST thread_poller_perf 00:07:22.895 ************************************ 00:07:22.895 21:10:57 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:22.895 00:07:22.895 real 0m2.602s 00:07:22.895 user 0m2.269s 00:07:22.895 sys 0m0.346s 00:07:22.895 21:10:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.895 21:10:57 -- common/autotest_common.sh@10 -- # set +x 00:07:22.895 ************************************ 00:07:22.895 END TEST thread 00:07:22.895 ************************************ 00:07:22.895 21:10:57 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:22.895 21:10:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:22.895 21:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.895 21:10:57 -- common/autotest_common.sh@10 -- # set +x 00:07:22.895 ************************************ 00:07:22.895 START TEST accel 00:07:22.895 ************************************ 00:07:22.895 21:10:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:22.895 * Looking for test storage... 00:07:22.895 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:22.895 21:10:57 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:22.895 21:10:57 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:22.895 21:10:57 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:22.895 21:10:57 -- accel/accel.sh@59 -- # spdk_tgt_pid=1516951 00:07:22.895 21:10:57 -- accel/accel.sh@60 -- # waitforlisten 1516951 00:07:22.895 21:10:57 -- common/autotest_common.sh@819 -- # '[' -z 1516951 ']' 00:07:22.895 21:10:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.895 21:10:57 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:22.895 21:10:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:22.895 21:10:57 -- accel/accel.sh@58 -- # build_accel_config 00:07:22.895 21:10:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
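The two poller_perf summaries above are easy to sanity-check: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure follows from the reported 2.5 GHz tsc_hz. Plain shell integer arithmetic happens to reproduce the tool's rounding exactly:
echo $(( 2510023360 / 417000 ))              # 6019 cyc  (1 us period run)
echo $(( 6019 * 1000000000 / 2500000000 ))   # 2407 nsec
echo $(( 2502397718 / 5696000 ))             # 439 cyc   (0 us period run)
echo $(( 439 * 1000000000 / 2500000000 ))    # 175 nsec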
00:07:22.895 21:10:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.895 21:10:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:22.895 21:10:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.895 21:10:57 -- common/autotest_common.sh@10 -- # set +x 00:07:22.895 21:10:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.895 21:10:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.895 21:10:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.895 21:10:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.895 21:10:57 -- accel/accel.sh@42 -- # jq -r . 00:07:22.895 [2024-07-26 21:10:57.605864] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:22.895 [2024-07-26 21:10:57.605920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516951 ] 00:07:22.895 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.895 [2024-07-26 21:10:57.689286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.895 [2024-07-26 21:10:57.725916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.895 [2024-07-26 21:10:57.726043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.832 21:10:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:23.832 21:10:58 -- common/autotest_common.sh@852 -- # return 0 00:07:23.832 21:10:58 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:23.832 21:10:58 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:23.832 21:10:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.832 21:10:58 -- common/autotest_common.sh@10 -- # set +x 00:07:23.832 21:10:58 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:23.832 21:10:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.832 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.832 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.832 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.832 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.832 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.832 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.832 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.832 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.832 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.832 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.832 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.832 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.833 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.833 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.833 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.833 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.833 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.833 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.833 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.833 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.833 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.833 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.833 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.833 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.833 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.833 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.833 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.833 21:10:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # IFS== 00:07:23.833 21:10:58 -- accel/accel.sh@64 -- # read -r opc module 00:07:23.833 21:10:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:23.833 21:10:58 -- accel/accel.sh@67 -- # killprocess 1516951 00:07:23.833 21:10:58 -- common/autotest_common.sh@926 -- # '[' -z 1516951 ']' 00:07:23.833 21:10:58 -- common/autotest_common.sh@930 -- # kill -0 1516951 00:07:23.833 21:10:58 -- common/autotest_common.sh@931 -- # uname 00:07:23.833 21:10:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:23.833 21:10:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1516951 00:07:23.833 21:10:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:23.833 21:10:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:23.833 21:10:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1516951' 00:07:23.833 killing process with pid 1516951 00:07:23.833 21:10:58 -- common/autotest_common.sh@945 -- # kill 1516951 00:07:23.833 21:10:58 -- common/autotest_common.sh@950 -- # wait 1516951 00:07:24.092 21:10:58 -- accel/accel.sh@68 -- # trap - ERR 00:07:24.092 21:10:58 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:24.092 21:10:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:24.092 21:10:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.092 21:10:58 -- common/autotest_common.sh@10 -- # set +x 00:07:24.092 21:10:58 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:07:24.092 21:10:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:24.092 21:10:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.092 21:10:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.092 21:10:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.092 21:10:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.092 21:10:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.092 21:10:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.092 21:10:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.092 21:10:58 -- accel/accel.sh@42 -- # jq -r . 
00:07:24.092 21:10:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.092 21:10:58 -- common/autotest_common.sh@10 -- # set +x 00:07:24.092 21:10:58 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:24.092 21:10:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:24.092 21:10:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.092 21:10:58 -- common/autotest_common.sh@10 -- # set +x 00:07:24.092 ************************************ 00:07:24.092 START TEST accel_missing_filename 00:07:24.092 ************************************ 00:07:24.092 21:10:58 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:07:24.092 21:10:58 -- common/autotest_common.sh@640 -- # local es=0 00:07:24.092 21:10:58 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:24.092 21:10:58 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:24.092 21:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:24.092 21:10:58 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:24.093 21:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:24.093 21:10:58 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:07:24.093 21:10:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.093 21:10:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:24.093 21:10:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.093 21:10:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.093 21:10:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.093 21:10:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.093 21:10:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.093 21:10:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.093 21:10:58 -- accel/accel.sh@42 -- # jq -r . 00:07:24.093 [2024-07-26 21:10:58.884996] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:24.093 [2024-07-26 21:10:58.885087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517241 ] 00:07:24.093 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.383 [2024-07-26 21:10:58.970299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.383 [2024-07-26 21:10:59.006401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.383 [2024-07-26 21:10:59.047217] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.383 [2024-07-26 21:10:59.107346] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:24.383 A filename is required. 
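[editor's note] accel_missing_filename above runs accel_perf under the NOT wrapper: compress without an -l input file must fail, which is what the "A filename is required." line confirms, and the es= bookkeeping that follows is the wrapper normalizing the exit status. The real helper lives in test/common/autotest_common.sh; this is only a minimal, hedged reduction of the idiom:

  #!/usr/bin/env bash
  # Minimal illustration of the NOT idiom: succeed only when the wrapped command fails.
  NOT() {
      if "$@"; then
          return 1     # unexpected success
      fi
      return 0         # expected failure
  }
  # Example: compress with no -l input file is expected to be rejected.
  NOT ./build/examples/accel_perf -t 1 -w compress && echo "negative test passed"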
00:07:24.383 21:10:59 -- common/autotest_common.sh@643 -- # es=234 00:07:24.383 21:10:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:24.383 21:10:59 -- common/autotest_common.sh@652 -- # es=106 00:07:24.383 21:10:59 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:24.383 21:10:59 -- common/autotest_common.sh@660 -- # es=1 00:07:24.383 21:10:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:24.383 00:07:24.383 real 0m0.315s 00:07:24.383 user 0m0.197s 00:07:24.383 sys 0m0.142s 00:07:24.383 21:10:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.383 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.383 ************************************ 00:07:24.383 END TEST accel_missing_filename 00:07:24.383 ************************************ 00:07:24.383 21:10:59 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.383 21:10:59 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:24.383 21:10:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.383 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.383 ************************************ 00:07:24.383 START TEST accel_compress_verify 00:07:24.383 ************************************ 00:07:24.383 21:10:59 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.383 21:10:59 -- common/autotest_common.sh@640 -- # local es=0 00:07:24.383 21:10:59 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.383 21:10:59 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:24.383 21:10:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:24.383 21:10:59 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:24.383 21:10:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:24.383 21:10:59 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.383 21:10:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.383 21:10:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.383 21:10:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.383 21:10:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.383 21:10:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.383 21:10:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.383 21:10:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.383 21:10:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.383 21:10:59 -- accel/accel.sh@42 -- # jq -r . 00:07:24.383 [2024-07-26 21:10:59.240643] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:24.383 [2024-07-26 21:10:59.240708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517316 ] 00:07:24.642 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.642 [2024-07-26 21:10:59.325502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.642 [2024-07-26 21:10:59.361142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.642 [2024-07-26 21:10:59.402226] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.642 [2024-07-26 21:10:59.462118] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:24.902 00:07:24.902 Compression does not support the verify option, aborting. 00:07:24.902 21:10:59 -- common/autotest_common.sh@643 -- # es=161 00:07:24.902 21:10:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:24.902 21:10:59 -- common/autotest_common.sh@652 -- # es=33 00:07:24.902 21:10:59 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:24.902 21:10:59 -- common/autotest_common.sh@660 -- # es=1 00:07:24.902 21:10:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:24.902 00:07:24.902 real 0m0.313s 00:07:24.902 user 0m0.205s 00:07:24.902 sys 0m0.145s 00:07:24.902 21:10:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.902 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.902 ************************************ 00:07:24.902 END TEST accel_compress_verify 00:07:24.902 ************************************ 00:07:24.902 21:10:59 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:24.902 21:10:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:24.902 21:10:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.902 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.902 ************************************ 00:07:24.902 START TEST accel_wrong_workload 00:07:24.902 ************************************ 00:07:24.902 21:10:59 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:24.902 21:10:59 -- common/autotest_common.sh@640 -- # local es=0 00:07:24.902 21:10:59 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:24.902 21:10:59 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:24.902 21:10:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:24.902 21:10:59 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:24.902 21:10:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:24.902 21:10:59 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:24.902 21:10:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:24.902 21:10:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.902 21:10:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.902 21:10:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.902 21:10:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.902 21:10:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.902 21:10:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.902 21:10:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.902 21:10:59 -- accel/accel.sh@42 -- # jq -r . 
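[editor's note] accel_compress_verify, just above, is the same pattern with a different trigger: -w compress combined with -y (verify) is rejected by accel_perf ("Compression does not support the verify option, aborting."), so the wrapped command failing is the passing outcome. Using the NOT sketch shown earlier, and with the bib input path shortened to a relative one for illustration:

  # Expected to exit non-zero, which is what satisfies the NOT wrapper:
  NOT ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y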
00:07:24.902 Unsupported workload type: foobar 00:07:24.902 [2024-07-26 21:10:59.593850] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:24.902 accel_perf options: 00:07:24.902 [-h help message] 00:07:24.902 [-q queue depth per core] 00:07:24.902 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:24.902 [-T number of threads per core 00:07:24.902 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:24.902 [-t time in seconds] 00:07:24.902 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:24.902 [ dif_verify, , dif_generate, dif_generate_copy 00:07:24.902 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:24.902 [-l for compress/decompress workloads, name of uncompressed input file 00:07:24.902 [-S for crc32c workload, use this seed value (default 0) 00:07:24.902 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:24.902 [-f for fill workload, use this BYTE value (default 255) 00:07:24.902 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:24.902 [-y verify result if this switch is on] 00:07:24.902 [-a tasks to allocate per core (default: same value as -q)] 00:07:24.902 Can be used to spread operations across a wider range of memory. 00:07:24.902 21:10:59 -- common/autotest_common.sh@643 -- # es=1 00:07:24.902 21:10:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:24.902 21:10:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:24.902 21:10:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:24.902 00:07:24.902 real 0m0.034s 00:07:24.902 user 0m0.018s 00:07:24.902 sys 0m0.016s 00:07:24.902 21:10:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.902 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.902 ************************************ 00:07:24.902 END TEST accel_wrong_workload 00:07:24.902 ************************************ 00:07:24.902 Error: writing output failed: Broken pipe 00:07:24.902 21:10:59 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:24.902 21:10:59 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:24.902 21:10:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.902 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.902 ************************************ 00:07:24.902 START TEST accel_negative_buffers 00:07:24.902 ************************************ 00:07:24.902 21:10:59 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:24.902 21:10:59 -- common/autotest_common.sh@640 -- # local es=0 00:07:24.902 21:10:59 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:24.902 21:10:59 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:24.902 21:10:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:24.902 21:10:59 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:24.902 21:10:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:24.902 21:10:59 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:24.902 21:10:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:07:24.902 21:10:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.902 21:10:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.902 21:10:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.902 21:10:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.902 21:10:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.902 21:10:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.902 21:10:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.902 21:10:59 -- accel/accel.sh@42 -- # jq -r . 00:07:24.902 -x option must be non-negative. 00:07:24.902 [2024-07-26 21:10:59.669271] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:24.902 accel_perf options: 00:07:24.902 [-h help message] 00:07:24.902 [-q queue depth per core] 00:07:24.902 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:24.902 [-T number of threads per core 00:07:24.902 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:24.902 [-t time in seconds] 00:07:24.902 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:24.902 [ dif_verify, , dif_generate, dif_generate_copy 00:07:24.902 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:24.902 [-l for compress/decompress workloads, name of uncompressed input file 00:07:24.902 [-S for crc32c workload, use this seed value (default 0) 00:07:24.902 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:24.902 [-f for fill workload, use this BYTE value (default 255) 00:07:24.902 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:24.902 [-y verify result if this switch is on] 00:07:24.902 [-a tasks to allocate per core (default: same value as -q)] 00:07:24.902 Can be used to spread operations across a wider range of memory. 
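[editor's note] The option summary printed twice above (by the foobar-workload and negative-buffer cases) is also the reference for the positive tests that follow. A plausible standalone invocation built only from those documented switches (paths and values are illustrative, not taken from this run):

  # 1-second software crc32c run: seed 32, queue depth 32, 4 KiB transfers, verify on.
  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -q 32 -o 4096 -y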
00:07:24.902 21:10:59 -- common/autotest_common.sh@643 -- # es=1 00:07:24.902 21:10:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:24.902 21:10:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:24.902 21:10:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:24.902 00:07:24.902 real 0m0.036s 00:07:24.902 user 0m0.020s 00:07:24.902 sys 0m0.016s 00:07:24.902 21:10:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.902 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.902 ************************************ 00:07:24.902 END TEST accel_negative_buffers 00:07:24.902 ************************************ 00:07:24.902 Error: writing output failed: Broken pipe 00:07:24.903 21:10:59 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:24.903 21:10:59 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:24.903 21:10:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.903 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.903 ************************************ 00:07:24.903 START TEST accel_crc32c 00:07:24.903 ************************************ 00:07:24.903 21:10:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:24.903 21:10:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.903 21:10:59 -- accel/accel.sh@17 -- # local accel_module 00:07:24.903 21:10:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:24.903 21:10:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:24.903 21:10:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.903 21:10:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.903 21:10:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.903 21:10:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.903 21:10:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.903 21:10:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.903 21:10:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.903 21:10:59 -- accel/accel.sh@42 -- # jq -r . 00:07:24.903 [2024-07-26 21:10:59.745387] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:24.903 [2024-07-26 21:10:59.745465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517573 ] 00:07:25.162 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.162 [2024-07-26 21:10:59.830861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.162 [2024-07-26 21:10:59.867129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.542 21:11:01 -- accel/accel.sh@18 -- # out=' 00:07:26.542 SPDK Configuration: 00:07:26.542 Core mask: 0x1 00:07:26.542 00:07:26.542 Accel Perf Configuration: 00:07:26.542 Workload Type: crc32c 00:07:26.542 CRC-32C seed: 32 00:07:26.542 Transfer size: 4096 bytes 00:07:26.542 Vector count 1 00:07:26.542 Module: software 00:07:26.542 Queue depth: 32 00:07:26.542 Allocate depth: 32 00:07:26.542 # threads/core: 1 00:07:26.542 Run time: 1 seconds 00:07:26.542 Verify: Yes 00:07:26.542 00:07:26.542 Running for 1 seconds... 
00:07:26.542 00:07:26.542 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.542 ------------------------------------------------------------------------------------ 00:07:26.542 0,0 602304/s 2352 MiB/s 0 0 00:07:26.542 ==================================================================================== 00:07:26.542 Total 602304/s 2352 MiB/s 0 0' 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:26.542 21:11:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:26.542 21:11:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.542 21:11:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.542 21:11:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.542 21:11:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.542 21:11:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.542 21:11:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.542 21:11:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.542 21:11:01 -- accel/accel.sh@42 -- # jq -r . 00:07:26.542 [2024-07-26 21:11:01.060876] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:26.542 [2024-07-26 21:11:01.060942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517762 ] 00:07:26.542 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.542 [2024-07-26 21:11:01.146805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.542 [2024-07-26 21:11:01.182239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val= 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val= 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val=0x1 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val= 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val= 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val=crc32c 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val=32 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 
-- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val= 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val=software 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val=32 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val=32 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val=1 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val=Yes 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val= 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:26.542 21:11:01 -- accel/accel.sh@21 -- # val= 00:07:26.542 21:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # IFS=: 00:07:26.542 21:11:01 -- accel/accel.sh@20 -- # read -r var val 00:07:27.480 21:11:02 -- accel/accel.sh@21 -- # val= 00:07:27.480 21:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.480 21:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:27.480 21:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:27.480 21:11:02 -- accel/accel.sh@21 -- # val= 00:07:27.480 21:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.480 21:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:27.480 21:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:27.480 21:11:02 -- accel/accel.sh@21 -- # val= 00:07:27.480 21:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.480 21:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:27.480 21:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:27.480 21:11:02 -- accel/accel.sh@21 -- # val= 00:07:27.480 21:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.480 21:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:27.480 21:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:27.739 21:11:02 -- accel/accel.sh@21 -- # val= 00:07:27.739 21:11:02 -- accel/accel.sh@22 -- # case "$var" in 
00:07:27.739 21:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:27.739 21:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:27.739 21:11:02 -- accel/accel.sh@21 -- # val= 00:07:27.739 21:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.739 21:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:27.739 21:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:27.739 21:11:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.739 21:11:02 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:27.739 21:11:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.739 00:07:27.739 real 0m2.636s 00:07:27.739 user 0m2.349s 00:07:27.739 sys 0m0.297s 00:07:27.739 21:11:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.739 21:11:02 -- common/autotest_common.sh@10 -- # set +x 00:07:27.739 ************************************ 00:07:27.739 END TEST accel_crc32c 00:07:27.739 ************************************ 00:07:27.739 21:11:02 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:27.739 21:11:02 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:27.739 21:11:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.740 21:11:02 -- common/autotest_common.sh@10 -- # set +x 00:07:27.740 ************************************ 00:07:27.740 START TEST accel_crc32c_C2 00:07:27.740 ************************************ 00:07:27.740 21:11:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:27.740 21:11:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.740 21:11:02 -- accel/accel.sh@17 -- # local accel_module 00:07:27.740 21:11:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:27.740 21:11:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:27.740 21:11:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.740 21:11:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.740 21:11:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.740 21:11:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.740 21:11:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.740 21:11:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.740 21:11:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.740 21:11:02 -- accel/accel.sh@42 -- # jq -r . 00:07:27.740 [2024-07-26 21:11:02.428339] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
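[editor's note] The accel_crc32c numbers a little further up hang together: at 4096-byte transfers and vector count 1, the reported transfer rate and bandwidth are two views of the same figure. Values copied from the log:

  xfers=602304   # transfers per second reported for core 0, thread 0
  size=4096      # transfer size in bytes (Vector count 1)
  echo "$((xfers * size / 1024 / 1024)) MiB/s"   # -> 2352 MiB/s, matching the table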
00:07:27.740 [2024-07-26 21:11:02.428405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517965 ] 00:07:27.740 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.740 [2024-07-26 21:11:02.514742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.740 [2024-07-26 21:11:02.550337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.116 21:11:03 -- accel/accel.sh@18 -- # out=' 00:07:29.116 SPDK Configuration: 00:07:29.116 Core mask: 0x1 00:07:29.116 00:07:29.116 Accel Perf Configuration: 00:07:29.116 Workload Type: crc32c 00:07:29.116 CRC-32C seed: 0 00:07:29.116 Transfer size: 4096 bytes 00:07:29.116 Vector count 2 00:07:29.116 Module: software 00:07:29.116 Queue depth: 32 00:07:29.116 Allocate depth: 32 00:07:29.116 # threads/core: 1 00:07:29.117 Run time: 1 seconds 00:07:29.117 Verify: Yes 00:07:29.117 00:07:29.117 Running for 1 seconds... 00:07:29.117 00:07:29.117 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.117 ------------------------------------------------------------------------------------ 00:07:29.117 0,0 480544/s 3754 MiB/s 0 0 00:07:29.117 ==================================================================================== 00:07:29.117 Total 480544/s 1877 MiB/s 0 0' 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:29.117 21:11:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:29.117 21:11:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.117 21:11:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.117 21:11:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.117 21:11:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.117 21:11:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.117 21:11:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.117 21:11:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.117 21:11:03 -- accel/accel.sh@42 -- # jq -r . 00:07:29.117 [2024-07-26 21:11:03.732227] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
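[editor's note] For the -C 2 variant above, the per-core bandwidth line appears to account for both 4 KiB source vectors per transfer; the same arithmetic with a vector count of 2 reproduces it (values copied from the log):

  xfers=480544; size=4096; vectors=2
  echo "$((xfers * size * vectors / 1024 / 1024)) MiB/s"   # -> 3754 MiB/s (per-core line)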
00:07:29.117 [2024-07-26 21:11:03.732290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518147 ] 00:07:29.117 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.117 [2024-07-26 21:11:03.817229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.117 [2024-07-26 21:11:03.853794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val= 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val= 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val=0x1 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val= 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val= 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val=crc32c 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val=0 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val= 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val=software 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val=32 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val=32 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- 
accel/accel.sh@21 -- # val=1 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val=Yes 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val= 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:29.117 21:11:03 -- accel/accel.sh@21 -- # val= 00:07:29.117 21:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:29.117 21:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.493 21:11:05 -- accel/accel.sh@21 -- # val= 00:07:30.493 21:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.493 21:11:05 -- accel/accel.sh@20 -- # IFS=: 00:07:30.493 21:11:05 -- accel/accel.sh@20 -- # read -r var val 00:07:30.493 21:11:05 -- accel/accel.sh@21 -- # val= 00:07:30.493 21:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.493 21:11:05 -- accel/accel.sh@20 -- # IFS=: 00:07:30.493 21:11:05 -- accel/accel.sh@20 -- # read -r var val 00:07:30.493 21:11:05 -- accel/accel.sh@21 -- # val= 00:07:30.494 21:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.494 21:11:05 -- accel/accel.sh@20 -- # IFS=: 00:07:30.494 21:11:05 -- accel/accel.sh@20 -- # read -r var val 00:07:30.494 21:11:05 -- accel/accel.sh@21 -- # val= 00:07:30.494 21:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.494 21:11:05 -- accel/accel.sh@20 -- # IFS=: 00:07:30.494 21:11:05 -- accel/accel.sh@20 -- # read -r var val 00:07:30.494 21:11:05 -- accel/accel.sh@21 -- # val= 00:07:30.494 21:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.494 21:11:05 -- accel/accel.sh@20 -- # IFS=: 00:07:30.494 21:11:05 -- accel/accel.sh@20 -- # read -r var val 00:07:30.494 21:11:05 -- accel/accel.sh@21 -- # val= 00:07:30.494 21:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.494 21:11:05 -- accel/accel.sh@20 -- # IFS=: 00:07:30.494 21:11:05 -- accel/accel.sh@20 -- # read -r var val 00:07:30.494 21:11:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.494 21:11:05 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:30.494 21:11:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.494 00:07:30.494 real 0m2.623s 00:07:30.494 user 0m2.348s 00:07:30.494 sys 0m0.284s 00:07:30.494 21:11:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.494 21:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:30.494 ************************************ 00:07:30.494 END TEST accel_crc32c_C2 00:07:30.494 ************************************ 00:07:30.494 21:11:05 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:30.494 21:11:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:30.494 21:11:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.494 21:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:30.494 ************************************ 00:07:30.494 START TEST accel_copy 
00:07:30.494 ************************************ 00:07:30.494 21:11:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:30.494 21:11:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.494 21:11:05 -- accel/accel.sh@17 -- # local accel_module 00:07:30.494 21:11:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:30.494 21:11:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:30.494 21:11:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.494 21:11:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.494 21:11:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.494 21:11:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.494 21:11:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.494 21:11:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.494 21:11:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.494 21:11:05 -- accel/accel.sh@42 -- # jq -r . 00:07:30.494 [2024-07-26 21:11:05.098673] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:30.494 [2024-07-26 21:11:05.098744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518431 ] 00:07:30.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.494 [2024-07-26 21:11:05.182864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.494 [2024-07-26 21:11:05.219324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.871 21:11:06 -- accel/accel.sh@18 -- # out=' 00:07:31.871 SPDK Configuration: 00:07:31.872 Core mask: 0x1 00:07:31.872 00:07:31.872 Accel Perf Configuration: 00:07:31.872 Workload Type: copy 00:07:31.872 Transfer size: 4096 bytes 00:07:31.872 Vector count 1 00:07:31.872 Module: software 00:07:31.872 Queue depth: 32 00:07:31.872 Allocate depth: 32 00:07:31.872 # threads/core: 1 00:07:31.872 Run time: 1 seconds 00:07:31.872 Verify: Yes 00:07:31.872 00:07:31.872 Running for 1 seconds... 00:07:31.872 00:07:31.872 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.872 ------------------------------------------------------------------------------------ 00:07:31.872 0,0 449856/s 1757 MiB/s 0 0 00:07:31.872 ==================================================================================== 00:07:31.872 Total 449856/s 1757 MiB/s 0 0' 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:31.872 21:11:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:31.872 21:11:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.872 21:11:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.872 21:11:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.872 21:11:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.872 21:11:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.872 21:11:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.872 21:11:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.872 21:11:06 -- accel/accel.sh@42 -- # jq -r . 00:07:31.872 [2024-07-26 21:11:06.412488] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:31.872 [2024-07-26 21:11:06.412567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518711 ] 00:07:31.872 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.872 [2024-07-26 21:11:06.498205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.872 [2024-07-26 21:11:06.535430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val= 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val= 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val=0x1 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val= 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val= 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val=copy 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val= 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val=software 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val=32 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val=32 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val=1 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val=Yes 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val= 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:31.872 21:11:06 -- accel/accel.sh@21 -- # val= 00:07:31.872 21:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # IFS=: 00:07:31.872 21:11:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.249 21:11:07 -- accel/accel.sh@21 -- # val= 00:07:33.249 21:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # IFS=: 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # read -r var val 00:07:33.249 21:11:07 -- accel/accel.sh@21 -- # val= 00:07:33.249 21:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # IFS=: 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # read -r var val 00:07:33.249 21:11:07 -- accel/accel.sh@21 -- # val= 00:07:33.249 21:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # IFS=: 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # read -r var val 00:07:33.249 21:11:07 -- accel/accel.sh@21 -- # val= 00:07:33.249 21:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # IFS=: 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # read -r var val 00:07:33.249 21:11:07 -- accel/accel.sh@21 -- # val= 00:07:33.249 21:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # IFS=: 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # read -r var val 00:07:33.249 21:11:07 -- accel/accel.sh@21 -- # val= 00:07:33.249 21:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # IFS=: 00:07:33.249 21:11:07 -- accel/accel.sh@20 -- # read -r var val 00:07:33.249 21:11:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.249 21:11:07 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:33.249 21:11:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.249 00:07:33.249 real 0m2.635s 00:07:33.249 user 0m2.354s 00:07:33.249 sys 0m0.288s 00:07:33.249 21:11:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.249 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:07:33.249 ************************************ 00:07:33.249 END TEST accel_copy 00:07:33.249 ************************************ 00:07:33.249 21:11:07 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.249 21:11:07 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:33.249 21:11:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.249 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:07:33.249 ************************************ 00:07:33.249 START TEST accel_fill 00:07:33.249 ************************************ 00:07:33.249 21:11:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.249 21:11:07 -- accel/accel.sh@16 -- # local accel_opc 
00:07:33.249 21:11:07 -- accel/accel.sh@17 -- # local accel_module 00:07:33.249 21:11:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.249 21:11:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:33.249 21:11:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.249 21:11:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.249 21:11:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.249 21:11:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.249 21:11:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.249 21:11:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.249 21:11:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.249 21:11:07 -- accel/accel.sh@42 -- # jq -r . 00:07:33.249 [2024-07-26 21:11:07.773019] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:33.249 [2024-07-26 21:11:07.773084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518993 ] 00:07:33.249 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.249 [2024-07-26 21:11:07.857057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.249 [2024-07-26 21:11:07.892554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.627 21:11:09 -- accel/accel.sh@18 -- # out=' 00:07:34.627 SPDK Configuration: 00:07:34.627 Core mask: 0x1 00:07:34.627 00:07:34.627 Accel Perf Configuration: 00:07:34.627 Workload Type: fill 00:07:34.627 Fill pattern: 0x80 00:07:34.627 Transfer size: 4096 bytes 00:07:34.627 Vector count 1 00:07:34.627 Module: software 00:07:34.627 Queue depth: 64 00:07:34.627 Allocate depth: 64 00:07:34.627 # threads/core: 1 00:07:34.627 Run time: 1 seconds 00:07:34.627 Verify: Yes 00:07:34.627 00:07:34.627 Running for 1 seconds... 00:07:34.627 00:07:34.627 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.627 ------------------------------------------------------------------------------------ 00:07:34.627 0,0 699392/s 2732 MiB/s 0 0 00:07:34.627 ==================================================================================== 00:07:34.627 Total 699392/s 2732 MiB/s 0 0' 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:34.627 21:11:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:34.627 21:11:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.627 21:11:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.627 21:11:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.627 21:11:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.627 21:11:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.627 21:11:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.627 21:11:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.627 21:11:09 -- accel/accel.sh@42 -- # jq -r . 00:07:34.627 [2024-07-26 21:11:09.084842] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
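[editor's note] The accel_fill configuration block above reports "Fill pattern: 0x80", "Queue depth: 64" and "Allocate depth: 64"; those are just the -f 128 -q 64 -a 64 arguments from the run_test line echoed back, with the fill byte rendered in hex:

  printf 'fill byte: 0x%02x\n' 128   # -> fill byte: 0x80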
00:07:34.627 [2024-07-26 21:11:09.084926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519261 ] 00:07:34.627 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.627 [2024-07-26 21:11:09.168857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.627 [2024-07-26 21:11:09.203136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val= 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val= 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val=0x1 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val= 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val= 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val=fill 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val=0x80 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val= 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val=software 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val=64 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val=64 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- 
accel/accel.sh@21 -- # val=1 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.627 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.627 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.627 21:11:09 -- accel/accel.sh@21 -- # val=Yes 00:07:34.628 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.628 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.628 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.628 21:11:09 -- accel/accel.sh@21 -- # val= 00:07:34.628 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.628 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.628 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:34.628 21:11:09 -- accel/accel.sh@21 -- # val= 00:07:34.628 21:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.628 21:11:09 -- accel/accel.sh@20 -- # IFS=: 00:07:34.628 21:11:09 -- accel/accel.sh@20 -- # read -r var val 00:07:35.566 21:11:10 -- accel/accel.sh@21 -- # val= 00:07:35.566 21:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # IFS=: 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # read -r var val 00:07:35.566 21:11:10 -- accel/accel.sh@21 -- # val= 00:07:35.566 21:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # IFS=: 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # read -r var val 00:07:35.566 21:11:10 -- accel/accel.sh@21 -- # val= 00:07:35.566 21:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # IFS=: 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # read -r var val 00:07:35.566 21:11:10 -- accel/accel.sh@21 -- # val= 00:07:35.566 21:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # IFS=: 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # read -r var val 00:07:35.566 21:11:10 -- accel/accel.sh@21 -- # val= 00:07:35.566 21:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # IFS=: 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # read -r var val 00:07:35.566 21:11:10 -- accel/accel.sh@21 -- # val= 00:07:35.566 21:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # IFS=: 00:07:35.566 21:11:10 -- accel/accel.sh@20 -- # read -r var val 00:07:35.566 21:11:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.566 21:11:10 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:35.566 21:11:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.566 00:07:35.566 real 0m2.629s 00:07:35.566 user 0m2.352s 00:07:35.566 sys 0m0.285s 00:07:35.566 21:11:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.566 21:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:35.566 ************************************ 00:07:35.566 END TEST accel_fill 00:07:35.566 ************************************ 00:07:35.566 21:11:10 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:35.566 21:11:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:35.566 21:11:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.566 21:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:35.566 ************************************ 00:07:35.566 START TEST 
accel_copy_crc32c 00:07:35.566 ************************************ 00:07:35.566 21:11:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:35.566 21:11:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.566 21:11:10 -- accel/accel.sh@17 -- # local accel_module 00:07:35.566 21:11:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:35.566 21:11:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:35.566 21:11:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.566 21:11:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.566 21:11:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.566 21:11:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.566 21:11:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.566 21:11:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.566 21:11:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.566 21:11:10 -- accel/accel.sh@42 -- # jq -r . 00:07:35.825 [2024-07-26 21:11:10.447977] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:35.825 [2024-07-26 21:11:10.448057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519527 ] 00:07:35.825 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.825 [2024-07-26 21:11:10.532722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.825 [2024-07-26 21:11:10.567792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.204 21:11:11 -- accel/accel.sh@18 -- # out=' 00:07:37.204 SPDK Configuration: 00:07:37.204 Core mask: 0x1 00:07:37.204 00:07:37.204 Accel Perf Configuration: 00:07:37.204 Workload Type: copy_crc32c 00:07:37.204 CRC-32C seed: 0 00:07:37.204 Vector size: 4096 bytes 00:07:37.204 Transfer size: 4096 bytes 00:07:37.204 Vector count 1 00:07:37.204 Module: software 00:07:37.204 Queue depth: 32 00:07:37.204 Allocate depth: 32 00:07:37.204 # threads/core: 1 00:07:37.204 Run time: 1 seconds 00:07:37.204 Verify: Yes 00:07:37.204 00:07:37.204 Running for 1 seconds... 00:07:37.204 00:07:37.204 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.204 ------------------------------------------------------------------------------------ 00:07:37.204 0,0 341824/s 1335 MiB/s 0 0 00:07:37.204 ==================================================================================== 00:07:37.204 Total 341824/s 1335 MiB/s 0 0' 00:07:37.204 21:11:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:37.204 21:11:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.204 21:11:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.204 21:11:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.204 21:11:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.204 21:11:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.204 21:11:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.204 21:11:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.204 21:11:11 -- accel/accel.sh@42 -- # jq -r . 
00:07:37.204 [2024-07-26 21:11:11.748140] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:37.204 [2024-07-26 21:11:11.748196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519698 ] 00:07:37.204 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.204 [2024-07-26 21:11:11.828394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.204 [2024-07-26 21:11:11.863395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val= 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val= 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val=0x1 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val= 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val= 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val=0 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val= 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val=software 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val=32 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 
00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val=32 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val=1 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val=Yes 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val= 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:37.204 21:11:11 -- accel/accel.sh@21 -- # val= 00:07:37.204 21:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # IFS=: 00:07:37.204 21:11:11 -- accel/accel.sh@20 -- # read -r var val 00:07:38.581 21:11:13 -- accel/accel.sh@21 -- # val= 00:07:38.581 21:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # IFS=: 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # read -r var val 00:07:38.581 21:11:13 -- accel/accel.sh@21 -- # val= 00:07:38.581 21:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # IFS=: 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # read -r var val 00:07:38.581 21:11:13 -- accel/accel.sh@21 -- # val= 00:07:38.581 21:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # IFS=: 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # read -r var val 00:07:38.581 21:11:13 -- accel/accel.sh@21 -- # val= 00:07:38.581 21:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # IFS=: 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # read -r var val 00:07:38.581 21:11:13 -- accel/accel.sh@21 -- # val= 00:07:38.581 21:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # IFS=: 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # read -r var val 00:07:38.581 21:11:13 -- accel/accel.sh@21 -- # val= 00:07:38.581 21:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # IFS=: 00:07:38.581 21:11:13 -- accel/accel.sh@20 -- # read -r var val 00:07:38.581 21:11:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.581 21:11:13 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:38.581 21:11:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.581 00:07:38.581 real 0m2.612s 00:07:38.581 user 0m2.356s 00:07:38.581 sys 0m0.266s 00:07:38.581 21:11:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.581 21:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:38.581 ************************************ 00:07:38.581 END TEST accel_copy_crc32c 00:07:38.581 ************************************ 00:07:38.581 
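The copy_crc32c case that just finished is driven by the accel_perf example binary whose full command line appears in the xtrace output above. A minimal sketch for repeating that workload by hand, assuming an SPDK tree built under SPDK_DIR (the CI job used the workspace path shown above) and that hugepages may still need to be set up on the host, reduces to:

#!/usr/bin/env bash
# Minimal sketch: re-run the copy_crc32c case from this log by hand.
# SPDK_DIR and the optional hugepage setup step are assumptions for a local
# checkout; the CI job used the workspace path shown in the xtrace lines above.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

# The EAL line above reports no free 2048 kB hugepages on node 1; if none are
# configured locally, SPDK's scripts/setup.sh can allocate them first.
# sudo "$SPDK_DIR/scripts/setup.sh"

# Same flags as the logged invocation: -t run time in seconds, -w workload,
# -y verify the result. Queue depth and allocate depth fall back to the 32/32
# values shown in the configuration block above.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y

At the 4096-byte transfer size used here, the reported bandwidth follows directly from the transfer rate: 341824 transfers/s × 4096 B is roughly 1335 MiB/s, which matches the result table above.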
21:11:13 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:38.581 21:11:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:38.581 21:11:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.581 21:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:38.581 ************************************ 00:07:38.581 START TEST accel_copy_crc32c_C2 00:07:38.581 ************************************ 00:07:38.581 21:11:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:38.581 21:11:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.581 21:11:13 -- accel/accel.sh@17 -- # local accel_module 00:07:38.581 21:11:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:38.581 21:11:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:38.581 21:11:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.581 21:11:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.581 21:11:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.581 21:11:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.581 21:11:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.581 21:11:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.581 21:11:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.581 21:11:13 -- accel/accel.sh@42 -- # jq -r . 00:07:38.581 [2024-07-26 21:11:13.101884] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:38.581 [2024-07-26 21:11:13.101952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519873 ] 00:07:38.581 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.581 [2024-07-26 21:11:13.187290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.581 [2024-07-26 21:11:13.222775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.956 21:11:14 -- accel/accel.sh@18 -- # out=' 00:07:39.956 SPDK Configuration: 00:07:39.956 Core mask: 0x1 00:07:39.956 00:07:39.956 Accel Perf Configuration: 00:07:39.956 Workload Type: copy_crc32c 00:07:39.956 CRC-32C seed: 0 00:07:39.956 Vector size: 4096 bytes 00:07:39.956 Transfer size: 8192 bytes 00:07:39.956 Vector count 2 00:07:39.956 Module: software 00:07:39.956 Queue depth: 32 00:07:39.956 Allocate depth: 32 00:07:39.956 # threads/core: 1 00:07:39.956 Run time: 1 seconds 00:07:39.956 Verify: Yes 00:07:39.956 00:07:39.956 Running for 1 seconds... 
00:07:39.956 00:07:39.956 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.956 ------------------------------------------------------------------------------------ 00:07:39.956 0,0 249984/s 1953 MiB/s 0 0 00:07:39.956 ==================================================================================== 00:07:39.956 Total 249984/s 976 MiB/s 0 0' 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:39.956 21:11:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:39.956 21:11:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.956 21:11:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.956 21:11:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.956 21:11:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.956 21:11:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.956 21:11:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.956 21:11:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.956 21:11:14 -- accel/accel.sh@42 -- # jq -r . 00:07:39.956 [2024-07-26 21:11:14.414886] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:39.956 [2024-07-26 21:11:14.414952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520128 ] 00:07:39.956 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.956 [2024-07-26 21:11:14.498834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.956 [2024-07-26 21:11:14.534159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val= 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val= 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val=0x1 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val= 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val= 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val=0 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 
00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val= 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val=software 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val=32 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val=32 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val=1 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val=Yes 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val= 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:39.956 21:11:14 -- accel/accel.sh@21 -- # val= 00:07:39.956 21:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # IFS=: 00:07:39.956 21:11:14 -- accel/accel.sh@20 -- # read -r var val 00:07:40.891 21:11:15 -- accel/accel.sh@21 -- # val= 00:07:40.891 21:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.891 21:11:15 -- accel/accel.sh@20 -- # IFS=: 00:07:40.891 21:11:15 -- accel/accel.sh@20 -- # read -r var val 00:07:40.891 21:11:15 -- accel/accel.sh@21 -- # val= 00:07:40.892 21:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # IFS=: 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # read -r var val 00:07:40.892 21:11:15 -- accel/accel.sh@21 -- # val= 00:07:40.892 21:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # IFS=: 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # read -r var val 00:07:40.892 21:11:15 -- accel/accel.sh@21 -- # val= 00:07:40.892 21:11:15 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # IFS=: 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # read -r var val 00:07:40.892 21:11:15 -- accel/accel.sh@21 -- # val= 00:07:40.892 21:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # IFS=: 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # read -r var val 00:07:40.892 21:11:15 -- accel/accel.sh@21 -- # val= 00:07:40.892 21:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # IFS=: 00:07:40.892 21:11:15 -- accel/accel.sh@20 -- # read -r var val 00:07:40.892 21:11:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.892 21:11:15 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:40.892 21:11:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.892 00:07:40.892 real 0m2.628s 00:07:40.892 user 0m2.350s 00:07:40.892 sys 0m0.286s 00:07:40.892 21:11:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.892 21:11:15 -- common/autotest_common.sh@10 -- # set +x 00:07:40.892 ************************************ 00:07:40.892 END TEST accel_copy_crc32c_C2 00:07:40.892 ************************************ 00:07:40.892 21:11:15 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:40.892 21:11:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:40.892 21:11:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.892 21:11:15 -- common/autotest_common.sh@10 -- # set +x 00:07:40.892 ************************************ 00:07:40.892 START TEST accel_dualcast 00:07:40.892 ************************************ 00:07:40.892 21:11:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:40.892 21:11:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.892 21:11:15 -- accel/accel.sh@17 -- # local accel_module 00:07:40.892 21:11:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:40.892 21:11:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:40.892 21:11:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.892 21:11:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.892 21:11:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.892 21:11:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.892 21:11:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.892 21:11:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.892 21:11:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.892 21:11:15 -- accel/accel.sh@42 -- # jq -r . 00:07:41.150 [2024-07-26 21:11:15.773679] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:41.150 [2024-07-26 21:11:15.773756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520410 ] 00:07:41.150 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.150 [2024-07-26 21:11:15.855815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.150 [2024-07-26 21:11:15.890861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.527 21:11:17 -- accel/accel.sh@18 -- # out=' 00:07:42.527 SPDK Configuration: 00:07:42.527 Core mask: 0x1 00:07:42.527 00:07:42.527 Accel Perf Configuration: 00:07:42.527 Workload Type: dualcast 00:07:42.527 Transfer size: 4096 bytes 00:07:42.527 Vector count 1 00:07:42.527 Module: software 00:07:42.527 Queue depth: 32 00:07:42.527 Allocate depth: 32 00:07:42.527 # threads/core: 1 00:07:42.527 Run time: 1 seconds 00:07:42.527 Verify: Yes 00:07:42.527 00:07:42.527 Running for 1 seconds... 00:07:42.527 00:07:42.527 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.527 ------------------------------------------------------------------------------------ 00:07:42.527 0,0 532800/s 2081 MiB/s 0 0 00:07:42.527 ==================================================================================== 00:07:42.527 Total 532800/s 2081 MiB/s 0 0' 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:42.527 21:11:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:42.527 21:11:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.527 21:11:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.527 21:11:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.527 21:11:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.527 21:11:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.527 21:11:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.527 21:11:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.527 21:11:17 -- accel/accel.sh@42 -- # jq -r . 00:07:42.527 [2024-07-26 21:11:17.082172] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:42.527 [2024-07-26 21:11:17.082239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520678 ] 00:07:42.527 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.527 [2024-07-26 21:11:17.166154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.527 [2024-07-26 21:11:17.200456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val= 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val= 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val=0x1 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val= 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val= 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val=dualcast 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val= 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val=software 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val=32 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val=32 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val=1 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val=Yes 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val= 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:42.527 21:11:17 -- accel/accel.sh@21 -- # val= 00:07:42.527 21:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # IFS=: 00:07:42.527 21:11:17 -- accel/accel.sh@20 -- # read -r var val 00:07:43.528 21:11:18 -- accel/accel.sh@21 -- # val= 00:07:43.528 21:11:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # IFS=: 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # read -r var val 00:07:43.528 21:11:18 -- accel/accel.sh@21 -- # val= 00:07:43.528 21:11:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # IFS=: 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # read -r var val 00:07:43.528 21:11:18 -- accel/accel.sh@21 -- # val= 00:07:43.528 21:11:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # IFS=: 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # read -r var val 00:07:43.528 21:11:18 -- accel/accel.sh@21 -- # val= 00:07:43.528 21:11:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # IFS=: 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # read -r var val 00:07:43.528 21:11:18 -- accel/accel.sh@21 -- # val= 00:07:43.528 21:11:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # IFS=: 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # read -r var val 00:07:43.528 21:11:18 -- accel/accel.sh@21 -- # val= 00:07:43.528 21:11:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # IFS=: 00:07:43.528 21:11:18 -- accel/accel.sh@20 -- # read -r var val 00:07:43.528 21:11:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.528 21:11:18 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:43.528 21:11:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.528 00:07:43.528 real 0m2.623s 00:07:43.528 user 0m2.355s 00:07:43.528 sys 0m0.275s 00:07:43.528 21:11:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.528 21:11:18 -- common/autotest_common.sh@10 -- # set +x 00:07:43.528 ************************************ 00:07:43.528 END TEST accel_dualcast 00:07:43.528 ************************************ 00:07:43.787 21:11:18 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:43.787 21:11:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:43.787 21:11:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.787 21:11:18 -- common/autotest_common.sh@10 -- # set +x 00:07:43.787 ************************************ 00:07:43.787 START TEST accel_compare 00:07:43.787 ************************************ 00:07:43.787 21:11:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:43.787 21:11:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.787 21:11:18 
-- accel/accel.sh@17 -- # local accel_module 00:07:43.787 21:11:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:43.787 21:11:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.787 21:11:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:43.787 21:11:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.787 21:11:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.787 21:11:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.787 21:11:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.787 21:11:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.787 21:11:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.787 21:11:18 -- accel/accel.sh@42 -- # jq -r . 00:07:43.787 [2024-07-26 21:11:18.439853] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:43.787 [2024-07-26 21:11:18.439917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520961 ] 00:07:43.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.787 [2024-07-26 21:11:18.523169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.787 [2024-07-26 21:11:18.558647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.162 21:11:19 -- accel/accel.sh@18 -- # out=' 00:07:45.162 SPDK Configuration: 00:07:45.162 Core mask: 0x1 00:07:45.162 00:07:45.162 Accel Perf Configuration: 00:07:45.162 Workload Type: compare 00:07:45.162 Transfer size: 4096 bytes 00:07:45.162 Vector count 1 00:07:45.162 Module: software 00:07:45.162 Queue depth: 32 00:07:45.162 Allocate depth: 32 00:07:45.162 # threads/core: 1 00:07:45.162 Run time: 1 seconds 00:07:45.162 Verify: Yes 00:07:45.162 00:07:45.162 Running for 1 seconds... 00:07:45.162 00:07:45.162 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.162 ------------------------------------------------------------------------------------ 00:07:45.162 0,0 642496/s 2509 MiB/s 0 0 00:07:45.162 ==================================================================================== 00:07:45.162 Total 642496/s 2509 MiB/s 0 0' 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.162 21:11:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:45.162 21:11:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:45.162 21:11:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.162 21:11:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.162 21:11:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.162 21:11:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.162 21:11:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.162 21:11:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.162 21:11:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.162 21:11:19 -- accel/accel.sh@42 -- # jq -r . 00:07:45.162 [2024-07-26 21:11:19.751267] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:45.162 [2024-07-26 21:11:19.751352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521233 ] 00:07:45.162 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.162 [2024-07-26 21:11:19.838815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.162 [2024-07-26 21:11:19.873136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.162 21:11:19 -- accel/accel.sh@21 -- # val= 00:07:45.162 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.162 21:11:19 -- accel/accel.sh@21 -- # val= 00:07:45.162 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.162 21:11:19 -- accel/accel.sh@21 -- # val=0x1 00:07:45.162 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.162 21:11:19 -- accel/accel.sh@21 -- # val= 00:07:45.162 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.162 21:11:19 -- accel/accel.sh@21 -- # val= 00:07:45.162 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.162 21:11:19 -- accel/accel.sh@21 -- # val=compare 00:07:45.162 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.162 21:11:19 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.162 21:11:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.162 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.162 21:11:19 -- accel/accel.sh@21 -- # val= 00:07:45.162 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.162 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.162 21:11:19 -- accel/accel.sh@21 -- # val=software 00:07:45.162 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.163 21:11:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.163 21:11:19 -- accel/accel.sh@21 -- # val=32 00:07:45.163 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.163 21:11:19 -- accel/accel.sh@21 -- # val=32 00:07:45.163 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.163 21:11:19 -- accel/accel.sh@21 -- # val=1 00:07:45.163 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.163 21:11:19 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:45.163 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.163 21:11:19 -- accel/accel.sh@21 -- # val=Yes 00:07:45.163 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.163 21:11:19 -- accel/accel.sh@21 -- # val= 00:07:45.163 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:45.163 21:11:19 -- accel/accel.sh@21 -- # val= 00:07:45.163 21:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # IFS=: 00:07:45.163 21:11:19 -- accel/accel.sh@20 -- # read -r var val 00:07:46.540 21:11:21 -- accel/accel.sh@21 -- # val= 00:07:46.541 21:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # IFS=: 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # read -r var val 00:07:46.541 21:11:21 -- accel/accel.sh@21 -- # val= 00:07:46.541 21:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # IFS=: 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # read -r var val 00:07:46.541 21:11:21 -- accel/accel.sh@21 -- # val= 00:07:46.541 21:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # IFS=: 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # read -r var val 00:07:46.541 21:11:21 -- accel/accel.sh@21 -- # val= 00:07:46.541 21:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # IFS=: 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # read -r var val 00:07:46.541 21:11:21 -- accel/accel.sh@21 -- # val= 00:07:46.541 21:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # IFS=: 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # read -r var val 00:07:46.541 21:11:21 -- accel/accel.sh@21 -- # val= 00:07:46.541 21:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # IFS=: 00:07:46.541 21:11:21 -- accel/accel.sh@20 -- # read -r var val 00:07:46.541 21:11:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.541 21:11:21 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:46.541 21:11:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.541 00:07:46.541 real 0m2.630s 00:07:46.541 user 0m2.358s 00:07:46.541 sys 0m0.279s 00:07:46.541 21:11:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.541 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:07:46.541 ************************************ 00:07:46.541 END TEST accel_compare 00:07:46.541 ************************************ 00:07:46.541 21:11:21 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:46.541 21:11:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:46.541 21:11:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.541 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:07:46.541 ************************************ 00:07:46.541 START TEST accel_xor 00:07:46.541 ************************************ 00:07:46.541 21:11:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:46.541 21:11:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.541 21:11:21 -- accel/accel.sh@17 
-- # local accel_module 00:07:46.541 21:11:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:46.541 21:11:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:46.541 21:11:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.541 21:11:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.541 21:11:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.541 21:11:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.541 21:11:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.541 21:11:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.541 21:11:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.541 21:11:21 -- accel/accel.sh@42 -- # jq -r . 00:07:46.541 [2024-07-26 21:11:21.098141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:46.541 [2024-07-26 21:11:21.098195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521460 ] 00:07:46.541 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.541 [2024-07-26 21:11:21.180372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.541 [2024-07-26 21:11:21.216528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.918 21:11:22 -- accel/accel.sh@18 -- # out=' 00:07:47.918 SPDK Configuration: 00:07:47.918 Core mask: 0x1 00:07:47.918 00:07:47.918 Accel Perf Configuration: 00:07:47.918 Workload Type: xor 00:07:47.918 Source buffers: 2 00:07:47.918 Transfer size: 4096 bytes 00:07:47.918 Vector count 1 00:07:47.918 Module: software 00:07:47.918 Queue depth: 32 00:07:47.918 Allocate depth: 32 00:07:47.918 # threads/core: 1 00:07:47.918 Run time: 1 seconds 00:07:47.918 Verify: Yes 00:07:47.918 00:07:47.918 Running for 1 seconds... 00:07:47.918 00:07:47.919 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.919 ------------------------------------------------------------------------------------ 00:07:47.919 0,0 508992/s 1988 MiB/s 0 0 00:07:47.919 ==================================================================================== 00:07:47.919 Total 508992/s 1988 MiB/s 0 0' 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:47.919 21:11:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:47.919 21:11:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.919 21:11:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.919 21:11:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.919 21:11:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.919 21:11:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.919 21:11:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.919 21:11:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.919 21:11:22 -- accel/accel.sh@42 -- # jq -r . 00:07:47.919 [2024-07-26 21:11:22.406176] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:47.919 [2024-07-26 21:11:22.406244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521607 ] 00:07:47.919 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.919 [2024-07-26 21:11:22.491487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.919 [2024-07-26 21:11:22.528085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val= 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val= 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val=0x1 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val= 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val= 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val=xor 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val=2 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val= 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val=software 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val=32 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val=32 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- 
accel/accel.sh@21 -- # val=1 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val=Yes 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val= 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:47.919 21:11:22 -- accel/accel.sh@21 -- # val= 00:07:47.919 21:11:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # IFS=: 00:07:47.919 21:11:22 -- accel/accel.sh@20 -- # read -r var val 00:07:48.856 21:11:23 -- accel/accel.sh@21 -- # val= 00:07:48.856 21:11:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # IFS=: 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # read -r var val 00:07:48.856 21:11:23 -- accel/accel.sh@21 -- # val= 00:07:48.856 21:11:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # IFS=: 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # read -r var val 00:07:48.856 21:11:23 -- accel/accel.sh@21 -- # val= 00:07:48.856 21:11:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # IFS=: 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # read -r var val 00:07:48.856 21:11:23 -- accel/accel.sh@21 -- # val= 00:07:48.856 21:11:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # IFS=: 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # read -r var val 00:07:48.856 21:11:23 -- accel/accel.sh@21 -- # val= 00:07:48.856 21:11:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # IFS=: 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # read -r var val 00:07:48.856 21:11:23 -- accel/accel.sh@21 -- # val= 00:07:48.856 21:11:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # IFS=: 00:07:48.856 21:11:23 -- accel/accel.sh@20 -- # read -r var val 00:07:48.856 21:11:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.856 21:11:23 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:48.856 21:11:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.856 00:07:48.856 real 0m2.616s 00:07:48.856 user 0m2.345s 00:07:48.856 sys 0m0.281s 00:07:48.856 21:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.856 21:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:48.856 ************************************ 00:07:48.856 END TEST accel_xor 00:07:48.856 ************************************ 00:07:49.115 21:11:23 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:49.115 21:11:23 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:49.115 21:11:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.115 21:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:49.115 ************************************ 00:07:49.115 START TEST accel_xor 
00:07:49.115 ************************************ 00:07:49.115 21:11:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:49.115 21:11:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.115 21:11:23 -- accel/accel.sh@17 -- # local accel_module 00:07:49.115 21:11:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:49.115 21:11:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:49.115 21:11:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.115 21:11:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.115 21:11:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.115 21:11:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.115 21:11:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.115 21:11:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.115 21:11:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.115 21:11:23 -- accel/accel.sh@42 -- # jq -r . 00:07:49.115 [2024-07-26 21:11:23.767633] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:49.115 [2024-07-26 21:11:23.767698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521829 ] 00:07:49.115 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.115 [2024-07-26 21:11:23.854436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.115 [2024-07-26 21:11:23.889711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.493 21:11:25 -- accel/accel.sh@18 -- # out=' 00:07:50.493 SPDK Configuration: 00:07:50.493 Core mask: 0x1 00:07:50.493 00:07:50.493 Accel Perf Configuration: 00:07:50.493 Workload Type: xor 00:07:50.493 Source buffers: 3 00:07:50.493 Transfer size: 4096 bytes 00:07:50.493 Vector count 1 00:07:50.493 Module: software 00:07:50.493 Queue depth: 32 00:07:50.493 Allocate depth: 32 00:07:50.493 # threads/core: 1 00:07:50.493 Run time: 1 seconds 00:07:50.493 Verify: Yes 00:07:50.493 00:07:50.493 Running for 1 seconds... 00:07:50.493 00:07:50.493 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.493 ------------------------------------------------------------------------------------ 00:07:50.493 0,0 469440/s 1833 MiB/s 0 0 00:07:50.493 ==================================================================================== 00:07:50.493 Total 469440/s 1833 MiB/s 0 0' 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:50.493 21:11:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:50.493 21:11:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.493 21:11:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.493 21:11:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.493 21:11:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.493 21:11:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.493 21:11:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.493 21:11:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.493 21:11:25 -- accel/accel.sh@42 -- # jq -r . 00:07:50.493 [2024-07-26 21:11:25.070663] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
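The three-source XOR pass above follows the same pattern as the rest of this suite: run_test wraps accel_test, which launches the accel_perf example binary with the flags echoed in the xtrace records. A minimal standalone sketch of that invocation, assuming a locally built SPDK tree at ./spdk (the CI job uses the absolute Jenkins workspace path shown in the log), with flag meanings inferred from the "SPDK Configuration" dump:

#!/usr/bin/env bash
# Hypothetical rerun of the 3-source XOR pass traced above.
# Inferred from the configuration dump: -t 1 = run for 1 second,
# -w xor = workload type, -x 3 = three source buffers, -y = verify the output.
SPDK_EXAMPLES=./spdk/build/examples   # assumption: local build; the CI path differs
"$SPDK_EXAMPLES/accel_perf" -t 1 -w xor -y -x 3

The harness additionally passes -c /dev/fd/62, which appears to feed accel_perf the JSON accel configuration assembled by build_accel_config; this sketch omits it and relies on defaults.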
00:07:50.493 [2024-07-26 21:11:25.070728] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522095 ] 00:07:50.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.493 [2024-07-26 21:11:25.152606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.493 [2024-07-26 21:11:25.186860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val= 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val= 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val=0x1 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val= 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val= 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val=xor 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val=3 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val= 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val=software 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val=32 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val=32 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- 
accel/accel.sh@21 -- # val=1 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val=Yes 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val= 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 21:11:25 -- accel/accel.sh@21 -- # val= 00:07:50.493 21:11:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 21:11:25 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 21:11:26 -- accel/accel.sh@21 -- # val= 00:07:51.867 21:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 21:11:26 -- accel/accel.sh@21 -- # val= 00:07:51.867 21:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 21:11:26 -- accel/accel.sh@21 -- # val= 00:07:51.867 21:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 21:11:26 -- accel/accel.sh@21 -- # val= 00:07:51.867 21:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 21:11:26 -- accel/accel.sh@21 -- # val= 00:07:51.867 21:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 21:11:26 -- accel/accel.sh@21 -- # val= 00:07:51.867 21:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 21:11:26 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 21:11:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.868 21:11:26 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:51.868 21:11:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.868 00:07:51.868 real 0m2.618s 00:07:51.868 user 0m2.342s 00:07:51.868 sys 0m0.285s 00:07:51.868 21:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.868 21:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.868 ************************************ 00:07:51.868 END TEST accel_xor 00:07:51.868 ************************************ 00:07:51.868 21:11:26 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:51.868 21:11:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:51.868 21:11:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.868 21:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:51.868 ************************************ 00:07:51.868 START TEST 
accel_dif_verify 00:07:51.868 ************************************ 00:07:51.868 21:11:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:51.868 21:11:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.868 21:11:26 -- accel/accel.sh@17 -- # local accel_module 00:07:51.868 21:11:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:51.868 21:11:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:51.868 21:11:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.868 21:11:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.868 21:11:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.868 21:11:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.868 21:11:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.868 21:11:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.868 21:11:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.868 21:11:26 -- accel/accel.sh@42 -- # jq -r . 00:07:51.868 [2024-07-26 21:11:26.428470] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:51.868 [2024-07-26 21:11:26.428535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522379 ] 00:07:51.868 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.868 [2024-07-26 21:11:26.512317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.868 [2024-07-26 21:11:26.547350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.245 21:11:27 -- accel/accel.sh@18 -- # out=' 00:07:53.245 SPDK Configuration: 00:07:53.245 Core mask: 0x1 00:07:53.245 00:07:53.245 Accel Perf Configuration: 00:07:53.245 Workload Type: dif_verify 00:07:53.245 Vector size: 4096 bytes 00:07:53.245 Transfer size: 4096 bytes 00:07:53.245 Block size: 512 bytes 00:07:53.245 Metadata size: 8 bytes 00:07:53.245 Vector count 1 00:07:53.245 Module: software 00:07:53.245 Queue depth: 32 00:07:53.245 Allocate depth: 32 00:07:53.245 # threads/core: 1 00:07:53.245 Run time: 1 seconds 00:07:53.245 Verify: No 00:07:53.245 00:07:53.245 Running for 1 seconds... 00:07:53.245 00:07:53.245 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.245 ------------------------------------------------------------------------------------ 00:07:53.245 0,0 138240/s 548 MiB/s 0 0 00:07:53.245 ==================================================================================== 00:07:53.245 Total 138240/s 540 MiB/s 0 0' 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:53.245 21:11:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:53.245 21:11:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.245 21:11:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.245 21:11:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.245 21:11:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.245 21:11:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.245 21:11:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.245 21:11:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.245 21:11:27 -- accel/accel.sh@42 -- # jq -r . 
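Per the configuration dump above, the dif_verify pass treats each 4096-byte transfer as 512-byte blocks with 8 bytes of protection metadata (the DIF) per block, and measures how fast the software module can check that metadata; the "Verify: No" line refers to accel_perf's own output verification, not to the DIF check being benchmarked. A hedged sketch of rerunning just this case, under the same local-path assumption as the XOR sketch:

#!/usr/bin/env bash
# Hypothetical rerun of the dif_verify pass; flags mirror the xtrace above.
SPDK_EXAMPLES=./spdk/build/examples   # assumption: local build path
"$SPDK_EXAMPLES/accel_perf" -t 1 -w dif_verify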
00:07:53.245 [2024-07-26 21:11:27.740178] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:53.245 [2024-07-26 21:11:27.740267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522651 ] 00:07:53.245 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.245 [2024-07-26 21:11:27.824711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.245 [2024-07-26 21:11:27.859035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val= 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val= 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val=0x1 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val= 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val= 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val=dif_verify 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val= 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val=software 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val=32 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val=32 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val=1 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val=No 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val= 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:53.245 21:11:27 -- accel/accel.sh@21 -- # val= 00:07:53.245 21:11:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # IFS=: 00:07:53.245 21:11:27 -- accel/accel.sh@20 -- # read -r var val 00:07:54.181 21:11:29 -- accel/accel.sh@21 -- # val= 00:07:54.181 21:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # IFS=: 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # read -r var val 00:07:54.181 21:11:29 -- accel/accel.sh@21 -- # val= 00:07:54.181 21:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # IFS=: 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # read -r var val 00:07:54.181 21:11:29 -- accel/accel.sh@21 -- # val= 00:07:54.181 21:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # IFS=: 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # read -r var val 00:07:54.181 21:11:29 -- accel/accel.sh@21 -- # val= 00:07:54.181 21:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # IFS=: 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # read -r var val 00:07:54.181 21:11:29 -- accel/accel.sh@21 -- # val= 00:07:54.181 21:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # IFS=: 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # read -r var val 00:07:54.181 21:11:29 -- accel/accel.sh@21 -- # val= 00:07:54.181 21:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # IFS=: 00:07:54.181 21:11:29 -- accel/accel.sh@20 -- # read -r var val 00:07:54.181 21:11:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.181 21:11:29 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:54.181 21:11:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.181 00:07:54.181 real 0m2.627s 00:07:54.181 user 0m2.357s 00:07:54.181 sys 0m0.279s 00:07:54.181 21:11:29 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.181 21:11:29 -- common/autotest_common.sh@10 -- # set +x 00:07:54.181 ************************************ 00:07:54.181 END TEST accel_dif_verify 00:07:54.181 ************************************ 00:07:54.440 21:11:29 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:54.440 21:11:29 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:54.440 21:11:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.440 21:11:29 -- common/autotest_common.sh@10 -- # set +x 00:07:54.440 ************************************ 00:07:54.440 START TEST accel_dif_generate 00:07:54.440 ************************************ 00:07:54.440 21:11:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:54.440 21:11:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.440 21:11:29 -- accel/accel.sh@17 -- # local accel_module 00:07:54.440 21:11:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:54.440 21:11:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:54.440 21:11:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.440 21:11:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.440 21:11:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.440 21:11:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.440 21:11:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.440 21:11:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.440 21:11:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.440 21:11:29 -- accel/accel.sh@42 -- # jq -r . 00:07:54.440 [2024-07-26 21:11:29.096649] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:54.440 [2024-07-26 21:11:29.096715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522934 ] 00:07:54.440 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.440 [2024-07-26 21:11:29.180265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.440 [2024-07-26 21:11:29.215386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.816 21:11:30 -- accel/accel.sh@18 -- # out=' 00:07:55.816 SPDK Configuration: 00:07:55.816 Core mask: 0x1 00:07:55.816 00:07:55.816 Accel Perf Configuration: 00:07:55.816 Workload Type: dif_generate 00:07:55.816 Vector size: 4096 bytes 00:07:55.816 Transfer size: 4096 bytes 00:07:55.816 Block size: 512 bytes 00:07:55.816 Metadata size: 8 bytes 00:07:55.816 Vector count 1 00:07:55.816 Module: software 00:07:55.816 Queue depth: 32 00:07:55.816 Allocate depth: 32 00:07:55.816 # threads/core: 1 00:07:55.816 Run time: 1 seconds 00:07:55.816 Verify: No 00:07:55.816 00:07:55.816 Running for 1 seconds... 
00:07:55.816 00:07:55.816 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.816 ------------------------------------------------------------------------------------ 00:07:55.816 0,0 165952/s 658 MiB/s 0 0 00:07:55.816 ==================================================================================== 00:07:55.816 Total 165952/s 648 MiB/s 0 0' 00:07:55.816 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.816 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.816 21:11:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:55.816 21:11:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:55.817 21:11:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.817 21:11:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.817 21:11:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.817 21:11:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.817 21:11:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.817 21:11:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.817 21:11:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.817 21:11:30 -- accel/accel.sh@42 -- # jq -r . 00:07:55.817 [2024-07-26 21:11:30.409226] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:55.817 [2024-07-26 21:11:30.409298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523156 ] 00:07:55.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.817 [2024-07-26 21:11:30.495667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.817 [2024-07-26 21:11:30.530858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val= 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val= 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val=0x1 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val= 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val= 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val=dif_generate 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 
00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val= 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val=software 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val=32 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val=32 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val=1 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val=No 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val= 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:55.817 21:11:30 -- accel/accel.sh@21 -- # val= 00:07:55.817 21:11:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # IFS=: 00:07:55.817 21:11:30 -- accel/accel.sh@20 -- # read -r var val 00:07:57.194 21:11:31 -- accel/accel.sh@21 -- # val= 00:07:57.194 21:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # IFS=: 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # read -r var val 00:07:57.194 21:11:31 -- accel/accel.sh@21 -- # val= 00:07:57.194 21:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # IFS=: 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # read -r var val 00:07:57.194 21:11:31 -- accel/accel.sh@21 -- # val= 00:07:57.194 21:11:31 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # IFS=: 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # read -r var val 00:07:57.194 21:11:31 -- accel/accel.sh@21 -- # val= 00:07:57.194 21:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # IFS=: 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # read -r var val 00:07:57.194 21:11:31 -- accel/accel.sh@21 -- # val= 00:07:57.194 21:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # IFS=: 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # read -r var val 00:07:57.194 21:11:31 -- accel/accel.sh@21 -- # val= 00:07:57.194 21:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # IFS=: 00:07:57.194 21:11:31 -- accel/accel.sh@20 -- # read -r var val 00:07:57.194 21:11:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:57.194 21:11:31 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:57.194 21:11:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.194 00:07:57.194 real 0m2.631s 00:07:57.194 user 0m2.357s 00:07:57.194 sys 0m0.285s 00:07:57.194 21:11:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.194 21:11:31 -- common/autotest_common.sh@10 -- # set +x 00:07:57.194 ************************************ 00:07:57.194 END TEST accel_dif_generate 00:07:57.194 ************************************ 00:07:57.194 21:11:31 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:57.194 21:11:31 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:57.194 21:11:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.194 21:11:31 -- common/autotest_common.sh@10 -- # set +x 00:07:57.194 ************************************ 00:07:57.194 START TEST accel_dif_generate_copy 00:07:57.194 ************************************ 00:07:57.194 21:11:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:57.194 21:11:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:57.194 21:11:31 -- accel/accel.sh@17 -- # local accel_module 00:07:57.194 21:11:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:57.194 21:11:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:57.194 21:11:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.194 21:11:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.194 21:11:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.194 21:11:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.194 21:11:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.194 21:11:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.194 21:11:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.194 21:11:31 -- accel/accel.sh@42 -- # jq -r . 00:07:57.194 [2024-07-26 21:11:31.773933] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:57.194 [2024-07-26 21:11:31.773998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523355 ] 00:07:57.194 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.194 [2024-07-26 21:11:31.858354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.194 [2024-07-26 21:11:31.893796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.576 21:11:33 -- accel/accel.sh@18 -- # out=' 00:07:58.576 SPDK Configuration: 00:07:58.576 Core mask: 0x1 00:07:58.576 00:07:58.576 Accel Perf Configuration: 00:07:58.576 Workload Type: dif_generate_copy 00:07:58.576 Vector size: 4096 bytes 00:07:58.576 Transfer size: 4096 bytes 00:07:58.576 Vector count 1 00:07:58.576 Module: software 00:07:58.576 Queue depth: 32 00:07:58.576 Allocate depth: 32 00:07:58.576 # threads/core: 1 00:07:58.576 Run time: 1 seconds 00:07:58.576 Verify: No 00:07:58.576 00:07:58.576 Running for 1 seconds... 00:07:58.576 00:07:58.576 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:58.576 ------------------------------------------------------------------------------------ 00:07:58.576 0,0 128960/s 511 MiB/s 0 0 00:07:58.576 ==================================================================================== 00:07:58.576 Total 128960/s 503 MiB/s 0 0' 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:58.576 21:11:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:58.576 21:11:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.576 21:11:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.576 21:11:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.576 21:11:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.576 21:11:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.576 21:11:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.576 21:11:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.576 21:11:33 -- accel/accel.sh@42 -- # jq -r . 00:07:58.576 [2024-07-26 21:11:33.086773] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
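The two generate-side DIF cases bracket this point in the log: accel_dif_generate (finishing a little further up) produces the per-block protection metadata, and accel_dif_generate_copy, whose first pass completes just above, appears to do the same while copying the data into a destination buffer, as the workload name suggests. Both are driven exactly like the verify case; a hedged sketch under the same assumptions:

#!/usr/bin/env bash
# Hypothetical reruns of the generate-side DIF passes traced above.
SPDK_EXAMPLES=./spdk/build/examples   # assumption: local build path
"$SPDK_EXAMPLES/accel_perf" -t 1 -w dif_generate
"$SPDK_EXAMPLES/accel_perf" -t 1 -w dif_generate_copy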
00:07:58.576 [2024-07-26 21:11:33.086840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523519 ] 00:07:58.576 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.576 [2024-07-26 21:11:33.171868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.576 [2024-07-26 21:11:33.206398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val= 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val= 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val=0x1 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val= 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val= 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val= 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val=software 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val=32 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val=32 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r 
var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val=1 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val=No 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val= 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:58.576 21:11:33 -- accel/accel.sh@21 -- # val= 00:07:58.576 21:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # IFS=: 00:07:58.576 21:11:33 -- accel/accel.sh@20 -- # read -r var val 00:07:59.511 21:11:34 -- accel/accel.sh@21 -- # val= 00:07:59.511 21:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # IFS=: 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # read -r var val 00:07:59.511 21:11:34 -- accel/accel.sh@21 -- # val= 00:07:59.511 21:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # IFS=: 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # read -r var val 00:07:59.511 21:11:34 -- accel/accel.sh@21 -- # val= 00:07:59.511 21:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # IFS=: 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # read -r var val 00:07:59.511 21:11:34 -- accel/accel.sh@21 -- # val= 00:07:59.511 21:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # IFS=: 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # read -r var val 00:07:59.511 21:11:34 -- accel/accel.sh@21 -- # val= 00:07:59.511 21:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # IFS=: 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # read -r var val 00:07:59.511 21:11:34 -- accel/accel.sh@21 -- # val= 00:07:59.511 21:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # IFS=: 00:07:59.511 21:11:34 -- accel/accel.sh@20 -- # read -r var val 00:07:59.511 21:11:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.511 21:11:34 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:59.511 21:11:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.511 00:07:59.511 real 0m2.629s 00:07:59.511 user 0m2.362s 00:07:59.511 sys 0m0.275s 00:07:59.511 21:11:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.511 21:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:59.511 ************************************ 00:07:59.511 END TEST accel_dif_generate_copy 00:07:59.511 ************************************ 00:07:59.770 21:11:34 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:59.770 21:11:34 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:59.770 21:11:34 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:59.770 21:11:34 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.770 21:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:59.770 ************************************ 00:07:59.770 START TEST accel_comp 00:07:59.770 ************************************ 00:07:59.770 21:11:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:59.770 21:11:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.770 21:11:34 -- accel/accel.sh@17 -- # local accel_module 00:07:59.770 21:11:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:59.770 21:11:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:59.770 21:11:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.770 21:11:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.770 21:11:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.770 21:11:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.770 21:11:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.770 21:11:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.770 21:11:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.770 21:11:34 -- accel/accel.sh@42 -- # jq -r . 00:07:59.770 [2024-07-26 21:11:34.451401] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:59.770 [2024-07-26 21:11:34.451467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523794 ] 00:07:59.770 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.770 [2024-07-26 21:11:34.535979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.770 [2024-07-26 21:11:34.571069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.147 21:11:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:01.147 00:08:01.147 SPDK Configuration: 00:08:01.147 Core mask: 0x1 00:08:01.147 00:08:01.147 Accel Perf Configuration: 00:08:01.147 Workload Type: compress 00:08:01.147 Transfer size: 4096 bytes 00:08:01.147 Vector count 1 00:08:01.147 Module: software 00:08:01.147 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:01.147 Queue depth: 32 00:08:01.147 Allocate depth: 32 00:08:01.147 # threads/core: 1 00:08:01.147 Run time: 1 seconds 00:08:01.147 Verify: No 00:08:01.147 00:08:01.147 Running for 1 seconds... 
00:08:01.147 00:08:01.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:01.147 ------------------------------------------------------------------------------------ 00:08:01.147 0,0 63680/s 265 MiB/s 0 0 00:08:01.147 ==================================================================================== 00:08:01.147 Total 63680/s 248 MiB/s 0 0' 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:01.147 21:11:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:01.147 21:11:35 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.147 21:11:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.147 21:11:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.147 21:11:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.147 21:11:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.147 21:11:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.147 21:11:35 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.147 21:11:35 -- accel/accel.sh@42 -- # jq -r . 00:08:01.147 [2024-07-26 21:11:35.766514] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:01.147 [2024-07-26 21:11:35.766582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524062 ] 00:08:01.147 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.147 [2024-07-26 21:11:35.850575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.147 [2024-07-26 21:11:35.886093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val= 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val= 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val= 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val=0x1 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val= 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val= 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val=compress 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val= 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val=software 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@23 -- # accel_module=software 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val=32 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.147 21:11:35 -- accel/accel.sh@21 -- # val=32 00:08:01.147 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.147 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 21:11:35 -- accel/accel.sh@21 -- # val=1 00:08:01.148 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 21:11:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:01.148 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 21:11:35 -- accel/accel.sh@21 -- # val=No 00:08:01.148 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 21:11:35 -- accel/accel.sh@21 -- # val= 00:08:01.148 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 21:11:35 -- accel/accel.sh@21 -- # val= 00:08:01.148 21:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 21:11:35 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 21:11:37 -- accel/accel.sh@21 -- # val= 00:08:02.526 21:11:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 21:11:37 -- accel/accel.sh@21 -- # val= 00:08:02.526 21:11:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 21:11:37 -- accel/accel.sh@21 -- # val= 00:08:02.526 21:11:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 
21:11:37 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 21:11:37 -- accel/accel.sh@21 -- # val= 00:08:02.526 21:11:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 21:11:37 -- accel/accel.sh@21 -- # val= 00:08:02.526 21:11:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 21:11:37 -- accel/accel.sh@21 -- # val= 00:08:02.526 21:11:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 21:11:37 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 21:11:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:02.526 21:11:37 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:02.526 21:11:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.526 00:08:02.526 real 0m2.636s 00:08:02.526 user 0m2.360s 00:08:02.526 sys 0m0.285s 00:08:02.526 21:11:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.526 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:08:02.526 ************************************ 00:08:02.526 END TEST accel_comp 00:08:02.526 ************************************ 00:08:02.526 21:11:37 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:02.526 21:11:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:02.526 21:11:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.526 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:08:02.526 ************************************ 00:08:02.526 START TEST accel_decomp 00:08:02.526 ************************************ 00:08:02.526 21:11:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:02.526 21:11:37 -- accel/accel.sh@16 -- # local accel_opc 00:08:02.526 21:11:37 -- accel/accel.sh@17 -- # local accel_module 00:08:02.526 21:11:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:02.526 21:11:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:02.526 21:11:37 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.526 21:11:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.526 21:11:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.526 21:11:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.526 21:11:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.526 21:11:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.526 21:11:37 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.526 21:11:37 -- accel/accel.sh@42 -- # jq -r . 00:08:02.526 [2024-07-26 21:11:37.128059] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
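The compression tests switch from synthetic buffers to a real payload: the -l flag in the xtrace points accel_perf at spdk/test/accel/bib, which accel_comp runs through the compress workload and accel_decomp (starting above) feeds to the decompress workload with -y, which the configuration dump that follows reports as Verify: Yes. A hedged sketch of the pair, again assuming a local ./spdk checkout rather than the Jenkins workspace path:

#!/usr/bin/env bash
# Hypothetical reruns of the compress/decompress passes traced above.
SPDK_ROOT=./spdk                         # assumption: local checkout
EXAMPLES="$SPDK_ROOT/build/examples"
BIB="$SPDK_ROOT/test/accel/bib"          # the input file named on the -l flag in the log
"$EXAMPLES/accel_perf" -t 1 -w compress   -l "$BIB"
"$EXAMPLES/accel_perf" -t 1 -w decompress -l "$BIB" -y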
00:08:02.526 [2024-07-26 21:11:37.128143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524349 ] 00:08:02.526 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.526 [2024-07-26 21:11:37.213709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.526 [2024-07-26 21:11:37.249126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.933 21:11:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:03.933 00:08:03.933 SPDK Configuration: 00:08:03.933 Core mask: 0x1 00:08:03.933 00:08:03.933 Accel Perf Configuration: 00:08:03.933 Workload Type: decompress 00:08:03.933 Transfer size: 4096 bytes 00:08:03.933 Vector count 1 00:08:03.933 Module: software 00:08:03.933 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:03.933 Queue depth: 32 00:08:03.933 Allocate depth: 32 00:08:03.933 # threads/core: 1 00:08:03.933 Run time: 1 seconds 00:08:03.933 Verify: Yes 00:08:03.933 00:08:03.933 Running for 1 seconds... 00:08:03.933 00:08:03.933 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:03.933 ------------------------------------------------------------------------------------ 00:08:03.933 0,0 86304/s 159 MiB/s 0 0 00:08:03.933 ==================================================================================== 00:08:03.933 Total 86304/s 337 MiB/s 0 0' 00:08:03.933 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.933 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.933 21:11:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:03.933 21:11:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:03.933 21:11:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.933 21:11:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:03.933 21:11:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.933 21:11:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.934 21:11:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:03.934 21:11:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:03.934 21:11:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:03.934 21:11:38 -- accel/accel.sh@42 -- # jq -r . 00:08:03.934 [2024-07-26 21:11:38.444435] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
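As a sanity check on the results tables, the Total-row throughput is just transfers per second times the 4096-byte transfer size: the decompress Total row above reports 86304 transfers/s, which works out to about 337 MiB/s. A tiny helper for recomputing any row in this log (the values below are taken from that row as an example):

#!/usr/bin/env bash
# Recompute bandwidth from a results row: MiB/s ≈ transfers/s * transfer_size / 2^20.
transfers_per_sec=86304   # from the decompress "Total" row above
transfer_size=4096        # bytes, from "Transfer size: 4096 bytes"
awk -v t="$transfers_per_sec" -v s="$transfer_size" \
    'BEGIN { printf "%.0f MiB/s\n", t * s / (1024 * 1024) }'   # prints "337 MiB/s"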
00:08:03.934 [2024-07-26 21:11:38.444522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524620 ] 00:08:03.934 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.934 [2024-07-26 21:11:38.530817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.934 [2024-07-26 21:11:38.565284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val= 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val= 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val= 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val=0x1 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val= 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val= 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val=decompress 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val= 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val=software 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@23 -- # accel_module=software 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val=32 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- 
accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val=32 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val=1 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val=Yes 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val= 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:03.934 21:11:38 -- accel/accel.sh@21 -- # val= 00:08:03.934 21:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # IFS=: 00:08:03.934 21:11:38 -- accel/accel.sh@20 -- # read -r var val 00:08:04.871 21:11:39 -- accel/accel.sh@21 -- # val= 00:08:04.871 21:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # IFS=: 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # read -r var val 00:08:04.871 21:11:39 -- accel/accel.sh@21 -- # val= 00:08:04.871 21:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # IFS=: 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # read -r var val 00:08:04.871 21:11:39 -- accel/accel.sh@21 -- # val= 00:08:04.871 21:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # IFS=: 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # read -r var val 00:08:04.871 21:11:39 -- accel/accel.sh@21 -- # val= 00:08:04.871 21:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # IFS=: 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # read -r var val 00:08:04.871 21:11:39 -- accel/accel.sh@21 -- # val= 00:08:04.871 21:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # IFS=: 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # read -r var val 00:08:04.871 21:11:39 -- accel/accel.sh@21 -- # val= 00:08:04.871 21:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # IFS=: 00:08:04.871 21:11:39 -- accel/accel.sh@20 -- # read -r var val 00:08:04.871 21:11:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:04.871 21:11:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:04.871 21:11:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.871 00:08:04.871 real 0m2.640s 00:08:04.871 user 0m2.353s 00:08:04.871 sys 0m0.297s 00:08:04.871 21:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.871 21:11:39 -- common/autotest_common.sh@10 -- # set +x 00:08:04.871 ************************************ 00:08:04.871 END TEST accel_decomp 00:08:04.871 ************************************ 00:08:05.131 21:11:39 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:05.131 21:11:39 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:05.131 21:11:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.131 21:11:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.131 ************************************ 00:08:05.131 START TEST accel_decmop_full 00:08:05.131 ************************************ 00:08:05.131 21:11:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:05.131 21:11:39 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.131 21:11:39 -- accel/accel.sh@17 -- # local accel_module 00:08:05.131 21:11:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:05.131 21:11:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:05.131 21:11:39 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.131 21:11:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.131 21:11:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.131 21:11:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.131 21:11:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.131 21:11:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.131 21:11:39 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.131 21:11:39 -- accel/accel.sh@42 -- # jq -r . 00:08:05.131 [2024-07-26 21:11:39.808753] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:05.131 [2024-07-26 21:11:39.808832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524905 ] 00:08:05.131 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.131 [2024-07-26 21:11:39.893972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.131 [2024-07-26 21:11:39.929318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.509 21:11:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:06.509 00:08:06.509 SPDK Configuration: 00:08:06.509 Core mask: 0x1 00:08:06.509 00:08:06.509 Accel Perf Configuration: 00:08:06.509 Workload Type: decompress 00:08:06.509 Transfer size: 111250 bytes 00:08:06.509 Vector count 1 00:08:06.509 Module: software 00:08:06.509 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.509 Queue depth: 32 00:08:06.509 Allocate depth: 32 00:08:06.509 # threads/core: 1 00:08:06.509 Run time: 1 seconds 00:08:06.509 Verify: Yes 00:08:06.509 00:08:06.509 Running for 1 seconds... 
00:08:06.509 00:08:06.509 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:06.509 ------------------------------------------------------------------------------------ 00:08:06.509 0,0 5728/s 236 MiB/s 0 0 00:08:06.509 ==================================================================================== 00:08:06.509 Total 5728/s 607 MiB/s 0 0' 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:06.509 21:11:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:06.509 21:11:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.509 21:11:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.509 21:11:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.509 21:11:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.509 21:11:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.509 21:11:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.509 21:11:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.509 21:11:41 -- accel/accel.sh@42 -- # jq -r . 00:08:06.509 [2024-07-26 21:11:41.137705] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:06.509 [2024-07-26 21:11:41.137771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525118 ] 00:08:06.509 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.509 [2024-07-26 21:11:41.222857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.509 [2024-07-26 21:11:41.258289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val= 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val= 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val= 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val=0x1 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val= 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val= 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val=decompress 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 
00:08:06.509 21:11:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val= 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val=software 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@23 -- # accel_module=software 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val=32 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val=32 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val=1 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.509 21:11:41 -- accel/accel.sh@21 -- # val=Yes 00:08:06.509 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.509 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.510 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.510 21:11:41 -- accel/accel.sh@21 -- # val= 00:08:06.510 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.510 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.510 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:06.510 21:11:41 -- accel/accel.sh@21 -- # val= 00:08:06.510 21:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.510 21:11:41 -- accel/accel.sh@20 -- # IFS=: 00:08:06.510 21:11:41 -- accel/accel.sh@20 -- # read -r var val 00:08:07.887 21:11:42 -- accel/accel.sh@21 -- # val= 00:08:07.887 21:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.887 21:11:42 -- accel/accel.sh@20 -- # IFS=: 00:08:07.887 21:11:42 -- accel/accel.sh@20 -- # read -r var val 00:08:07.887 21:11:42 -- accel/accel.sh@21 -- # val= 00:08:07.887 21:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.887 21:11:42 -- accel/accel.sh@20 -- # IFS=: 00:08:07.887 21:11:42 -- accel/accel.sh@20 -- # read -r var val 00:08:07.887 21:11:42 -- accel/accel.sh@21 -- # val= 00:08:07.887 21:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.887 21:11:42 -- 
accel/accel.sh@20 -- # IFS=: 00:08:07.887 21:11:42 -- accel/accel.sh@20 -- # read -r var val 00:08:07.887 21:11:42 -- accel/accel.sh@21 -- # val= 00:08:07.887 21:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.887 21:11:42 -- accel/accel.sh@20 -- # IFS=: 00:08:07.887 21:11:42 -- accel/accel.sh@20 -- # read -r var val 00:08:07.887 21:11:42 -- accel/accel.sh@21 -- # val= 00:08:07.888 21:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.888 21:11:42 -- accel/accel.sh@20 -- # IFS=: 00:08:07.888 21:11:42 -- accel/accel.sh@20 -- # read -r var val 00:08:07.888 21:11:42 -- accel/accel.sh@21 -- # val= 00:08:07.888 21:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.888 21:11:42 -- accel/accel.sh@20 -- # IFS=: 00:08:07.888 21:11:42 -- accel/accel.sh@20 -- # read -r var val 00:08:07.888 21:11:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:07.888 21:11:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:07.888 21:11:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.888 00:08:07.888 real 0m2.659s 00:08:07.888 user 0m2.383s 00:08:07.888 sys 0m0.284s 00:08:07.888 21:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.888 21:11:42 -- common/autotest_common.sh@10 -- # set +x 00:08:07.888 ************************************ 00:08:07.888 END TEST accel_decmop_full 00:08:07.888 ************************************ 00:08:07.888 21:11:42 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:07.888 21:11:42 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:07.888 21:11:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.888 21:11:42 -- common/autotest_common.sh@10 -- # set +x 00:08:07.888 ************************************ 00:08:07.888 START TEST accel_decomp_mcore 00:08:07.888 ************************************ 00:08:07.888 21:11:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:07.888 21:11:42 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.888 21:11:42 -- accel/accel.sh@17 -- # local accel_module 00:08:07.888 21:11:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:07.888 21:11:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:07.888 21:11:42 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.888 21:11:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.888 21:11:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.888 21:11:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.888 21:11:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.888 21:11:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.888 21:11:42 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.888 21:11:42 -- accel/accel.sh@42 -- # jq -r . 00:08:07.888 [2024-07-26 21:11:42.510746] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
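The accel_decomp_mcore case launched above differs from the single-core runs only by the -m 0xf core mask on the accel_perf command line. As a quick reference, a hex mask can be decoded into core IDs with plain shell arithmetic; this is a generic helper, not part of accel.sh, and 0xf selects cores 0-3, which matches the "Total cores available: 4" notice and the four reactor lines that follow.

#!/usr/bin/env bash
# Decode a hex core mask (e.g. the -m 0xf passed to accel_perf above) into
# the list of core IDs it selects. Generic sketch, not part of the harness.
mask=${1:-0xf}
cores=()
for ((i = 0; i < 64; i++)); do
    (( (mask >> i) & 1 )) && cores+=("$i")
done
echo "mask $mask -> cores: ${cores[*]}"   # 0xf -> cores: 0 1 2 3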
00:08:07.888 [2024-07-26 21:11:42.510835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525317 ] 00:08:07.888 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.888 [2024-07-26 21:11:42.597438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.888 [2024-07-26 21:11:42.635314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.888 [2024-07-26 21:11:42.635412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.888 [2024-07-26 21:11:42.635503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.888 [2024-07-26 21:11:42.635505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.266 21:11:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:09.266 00:08:09.266 SPDK Configuration: 00:08:09.266 Core mask: 0xf 00:08:09.266 00:08:09.266 Accel Perf Configuration: 00:08:09.266 Workload Type: decompress 00:08:09.266 Transfer size: 4096 bytes 00:08:09.266 Vector count 1 00:08:09.266 Module: software 00:08:09.266 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:09.266 Queue depth: 32 00:08:09.266 Allocate depth: 32 00:08:09.266 # threads/core: 1 00:08:09.266 Run time: 1 seconds 00:08:09.266 Verify: Yes 00:08:09.266 00:08:09.266 Running for 1 seconds... 00:08:09.266 00:08:09.266 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:09.266 ------------------------------------------------------------------------------------ 00:08:09.266 0,0 73920/s 136 MiB/s 0 0 00:08:09.266 3,0 74592/s 137 MiB/s 0 0 00:08:09.266 2,0 74336/s 137 MiB/s 0 0 00:08:09.266 1,0 74336/s 137 MiB/s 0 0 00:08:09.266 ==================================================================================== 00:08:09.266 Total 297184/s 1160 MiB/s 0 0' 00:08:09.266 21:11:43 -- accel/accel.sh@20 -- # IFS=: 00:08:09.266 21:11:43 -- accel/accel.sh@20 -- # read -r var val 00:08:09.266 21:11:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:09.266 21:11:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:09.266 21:11:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.266 21:11:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.266 21:11:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.266 21:11:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.266 21:11:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.266 21:11:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.266 21:11:43 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.266 21:11:43 -- accel/accel.sh@42 -- # jq -r . 00:08:09.266 [2024-07-26 21:11:43.837119] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
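The per-core rows of the report above can be cross-checked against the printed total: the four 4096-byte streams sum to the aggregate transfer rate, and multiplying by the transfer size reproduces the 1160 MiB/s figure (integer MiB, since the report rounds down). The sketch below only restates that arithmetic with the numbers taken from this run.

#!/usr/bin/env bash
# Sanity-check the accel_decomp_mcore report: per-core transfer rates and the
# 4096-byte transfer size are copied from the table above.
per_core=(73920 74592 74336 74336)   # cores 0, 3, 2, 1 in table order
size=4096                            # "Transfer size: 4096 bytes"
total=0
for r in "${per_core[@]}"; do (( total += r )); done
echo "total transfers/s: $total"                       # 297184, as reported
echo "aggregate: $(( total * size / 1048576 )) MiB/s"  # 1160, as reported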
00:08:09.266 [2024-07-26 21:11:43.837187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525488 ] 00:08:09.266 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.266 [2024-07-26 21:11:43.922634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.266 [2024-07-26 21:11:43.960334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.266 [2024-07-26 21:11:43.960432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.266 [2024-07-26 21:11:43.960504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.266 [2024-07-26 21:11:43.960506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.266 21:11:44 -- accel/accel.sh@21 -- # val= 00:08:09.266 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.266 21:11:44 -- accel/accel.sh@21 -- # val= 00:08:09.266 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.266 21:11:44 -- accel/accel.sh@21 -- # val= 00:08:09.266 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.266 21:11:44 -- accel/accel.sh@21 -- # val=0xf 00:08:09.266 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.266 21:11:44 -- accel/accel.sh@21 -- # val= 00:08:09.266 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.266 21:11:44 -- accel/accel.sh@21 -- # val= 00:08:09.266 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.266 21:11:44 -- accel/accel.sh@21 -- # val=decompress 00:08:09.266 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.266 21:11:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:09.266 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val= 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val=software 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@23 -- # accel_module=software 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" 
in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val=32 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val=32 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val=1 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val=Yes 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val= 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:09.267 21:11:44 -- accel/accel.sh@21 -- # val= 00:08:09.267 21:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # IFS=: 00:08:09.267 21:11:44 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@21 -- # val= 00:08:10.645 21:11:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # IFS=: 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@21 -- # val= 00:08:10.645 21:11:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # IFS=: 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@21 -- # val= 00:08:10.645 21:11:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # IFS=: 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@21 -- # val= 00:08:10.645 21:11:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # IFS=: 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@21 -- # val= 00:08:10.645 21:11:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # IFS=: 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@21 -- # val= 00:08:10.645 21:11:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # IFS=: 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@21 -- # val= 00:08:10.645 21:11:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # IFS=: 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@21 -- # val= 00:08:10.645 21:11:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.645 21:11:45 
-- accel/accel.sh@20 -- # IFS=: 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@21 -- # val= 00:08:10.645 21:11:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # IFS=: 00:08:10.645 21:11:45 -- accel/accel.sh@20 -- # read -r var val 00:08:10.645 21:11:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:10.645 21:11:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:10.645 21:11:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.645 00:08:10.645 real 0m2.658s 00:08:10.645 user 0m9.031s 00:08:10.645 sys 0m0.296s 00:08:10.645 21:11:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.645 21:11:45 -- common/autotest_common.sh@10 -- # set +x 00:08:10.645 ************************************ 00:08:10.645 END TEST accel_decomp_mcore 00:08:10.645 ************************************ 00:08:10.645 21:11:45 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:10.645 21:11:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:10.645 21:11:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.645 21:11:45 -- common/autotest_common.sh@10 -- # set +x 00:08:10.645 ************************************ 00:08:10.645 START TEST accel_decomp_full_mcore 00:08:10.645 ************************************ 00:08:10.645 21:11:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:10.645 21:11:45 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.645 21:11:45 -- accel/accel.sh@17 -- # local accel_module 00:08:10.645 21:11:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:10.645 21:11:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:10.645 21:11:45 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.645 21:11:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.645 21:11:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.645 21:11:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.645 21:11:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.645 21:11:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.645 21:11:45 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.645 21:11:45 -- accel/accel.sh@42 -- # jq -r . 00:08:10.645 [2024-07-26 21:11:45.195084] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
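Each case in this section ends the same way: just before the END TEST banner the harness checks that an accel module was selected, that the workload opcode was recorded, and that the module is the software fallback; the [[ -n software ]] and [[ software == \s\o\f\t\w\a\r\e ]] lines in the trace are those checks after variable expansion. Below is a reduced sketch of that final gate, using the variable names visible in the trace (accel_module, accel_opc) and an assumed expected module of software.

#!/usr/bin/env bash
# Reduced sketch of the pass/fail gate seen before each "END TEST" banner.
# accel_module and accel_opc are set earlier in accel.sh (the trace shows
# "accel_module=software" and "accel_opc=decompress"); "software" is the
# module these runs are expected to fall back to.
accel_module=software
accel_opc=decompress
expected_module=software

if [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == "$expected_module" ]]; then
    echo "PASS: $accel_opc ran on the $accel_module module"
else
    echo "FAIL: unexpected module '$accel_module' for opcode '$accel_opc'" >&2
    exit 1
fi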
00:08:10.645 [2024-07-26 21:11:45.195136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525775 ] 00:08:10.645 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.645 [2024-07-26 21:11:45.277190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.645 [2024-07-26 21:11:45.315388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.645 [2024-07-26 21:11:45.315486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.645 [2024-07-26 21:11:45.315572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.645 [2024-07-26 21:11:45.315574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.022 21:11:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:12.022 00:08:12.022 SPDK Configuration: 00:08:12.022 Core mask: 0xf 00:08:12.022 00:08:12.022 Accel Perf Configuration: 00:08:12.022 Workload Type: decompress 00:08:12.022 Transfer size: 111250 bytes 00:08:12.022 Vector count 1 00:08:12.022 Module: software 00:08:12.022 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:12.022 Queue depth: 32 00:08:12.022 Allocate depth: 32 00:08:12.022 # threads/core: 1 00:08:12.022 Run time: 1 seconds 00:08:12.022 Verify: Yes 00:08:12.022 00:08:12.022 Running for 1 seconds... 00:08:12.022 00:08:12.022 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:12.022 ------------------------------------------------------------------------------------ 00:08:12.022 0,0 5696/s 235 MiB/s 0 0 00:08:12.022 3,0 5728/s 236 MiB/s 0 0 00:08:12.022 2,0 5728/s 236 MiB/s 0 0 00:08:12.023 1,0 5728/s 236 MiB/s 0 0 00:08:12.023 ==================================================================================== 00:08:12.023 Total 22880/s 2427 MiB/s 0 0' 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:12.023 21:11:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:12.023 21:11:46 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.023 21:11:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.023 21:11:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.023 21:11:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.023 21:11:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.023 21:11:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.023 21:11:46 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.023 21:11:46 -- accel/accel.sh@42 -- # jq -r . 00:08:12.023 [2024-07-26 21:11:46.525118] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
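The 111250-byte totals line up with the earlier single-core full-buffer run: 22880 transfers/s across four cores versus 5728/s on one core is almost exactly a 4x scale-up, and the same transfers-times-size conversion reproduces the 2427 MiB/s aggregate. Both input figures are taken from the reports above; the sketch only restates the arithmetic.

#!/usr/bin/env bash
# Scaling check for accel_decomp_full_mcore, numbers copied from the logs above.
single_core=5728      # transfers/s, single-core -o 0 run (607 MiB/s report)
four_core=22880       # transfers/s, -m 0xf run above
size=111250           # "Transfer size: 111250 bytes"
echo "aggregate: $(( four_core * size / 1048576 )) MiB/s"   # 2427, as reported
# Scale factor with two decimals (bash arithmetic is integer-only, so use awk).
awk -v a="$four_core" -v b="$single_core" 'BEGIN { printf "scale-up: %.2fx\n", a / b }'
# -> scale-up: 3.99x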
00:08:12.023 [2024-07-26 21:11:46.525185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526046 ] 00:08:12.023 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.023 [2024-07-26 21:11:46.610063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.023 [2024-07-26 21:11:46.646983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.023 [2024-07-26 21:11:46.647081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.023 [2024-07-26 21:11:46.647142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.023 [2024-07-26 21:11:46.647144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val= 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val= 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val= 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val=0xf 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val= 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val= 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val=decompress 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val= 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val=software 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@23 -- # accel_module=software 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" 
in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val=32 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val=32 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val=1 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val=Yes 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val= 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.023 21:11:46 -- accel/accel.sh@21 -- # val= 00:08:12.023 21:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # IFS=: 00:08:12.023 21:11:46 -- accel/accel.sh@20 -- # read -r var val 00:08:12.960 21:11:47 -- accel/accel.sh@21 -- # val= 00:08:12.960 21:11:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.960 21:11:47 -- accel/accel.sh@20 -- # IFS=: 00:08:12.960 21:11:47 -- accel/accel.sh@20 -- # read -r var val 00:08:12.960 21:11:47 -- accel/accel.sh@21 -- # val= 00:08:12.960 21:11:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.960 21:11:47 -- accel/accel.sh@20 -- # IFS=: 00:08:12.960 21:11:47 -- accel/accel.sh@20 -- # read -r var val 00:08:12.960 21:11:47 -- accel/accel.sh@21 -- # val= 00:08:12.960 21:11:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.960 21:11:47 -- accel/accel.sh@20 -- # IFS=: 00:08:12.960 21:11:47 -- accel/accel.sh@20 -- # read -r var val 00:08:12.960 21:11:47 -- accel/accel.sh@21 -- # val= 00:08:12.960 21:11:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.960 21:11:47 -- accel/accel.sh@20 -- # IFS=: 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # read -r var val 00:08:13.219 21:11:47 -- accel/accel.sh@21 -- # val= 00:08:13.219 21:11:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # IFS=: 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # read -r var val 00:08:13.219 21:11:47 -- accel/accel.sh@21 -- # val= 00:08:13.219 21:11:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # IFS=: 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # read -r var val 00:08:13.219 21:11:47 -- accel/accel.sh@21 -- # val= 00:08:13.219 21:11:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # IFS=: 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # read -r var val 00:08:13.219 21:11:47 -- accel/accel.sh@21 -- # val= 00:08:13.219 21:11:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.219 21:11:47 
-- accel/accel.sh@20 -- # IFS=: 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # read -r var val 00:08:13.219 21:11:47 -- accel/accel.sh@21 -- # val= 00:08:13.219 21:11:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # IFS=: 00:08:13.219 21:11:47 -- accel/accel.sh@20 -- # read -r var val 00:08:13.219 21:11:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:13.219 21:11:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:13.219 21:11:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.219 00:08:13.219 real 0m2.654s 00:08:13.219 user 0m9.063s 00:08:13.219 sys 0m0.299s 00:08:13.219 21:11:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.219 21:11:47 -- common/autotest_common.sh@10 -- # set +x 00:08:13.219 ************************************ 00:08:13.219 END TEST accel_decomp_full_mcore 00:08:13.219 ************************************ 00:08:13.219 21:11:47 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:13.219 21:11:47 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:13.219 21:11:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.219 21:11:47 -- common/autotest_common.sh@10 -- # set +x 00:08:13.219 ************************************ 00:08:13.219 START TEST accel_decomp_mthread 00:08:13.219 ************************************ 00:08:13.219 21:11:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:13.219 21:11:47 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.219 21:11:47 -- accel/accel.sh@17 -- # local accel_module 00:08:13.219 21:11:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:13.219 21:11:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:13.219 21:11:47 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.219 21:11:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.219 21:11:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.219 21:11:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.219 21:11:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.219 21:11:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.219 21:11:47 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.219 21:11:47 -- accel/accel.sh@42 -- # jq -r . 00:08:13.219 [2024-07-26 21:11:47.912113] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:13.219 [2024-07-26 21:11:47.912190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526335 ] 00:08:13.219 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.219 [2024-07-26 21:11:47.996519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.219 [2024-07-26 21:11:48.030610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.595 21:11:49 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:14.595 00:08:14.595 SPDK Configuration: 00:08:14.595 Core mask: 0x1 00:08:14.595 00:08:14.595 Accel Perf Configuration: 00:08:14.595 Workload Type: decompress 00:08:14.595 Transfer size: 4096 bytes 00:08:14.595 Vector count 1 00:08:14.595 Module: software 00:08:14.595 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:14.595 Queue depth: 32 00:08:14.595 Allocate depth: 32 00:08:14.595 # threads/core: 2 00:08:14.595 Run time: 1 seconds 00:08:14.595 Verify: Yes 00:08:14.595 00:08:14.595 Running for 1 seconds... 00:08:14.595 00:08:14.595 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:14.595 ------------------------------------------------------------------------------------ 00:08:14.595 0,1 42272/s 77 MiB/s 0 0 00:08:14.595 0,0 42112/s 77 MiB/s 0 0 00:08:14.595 ==================================================================================== 00:08:14.595 Total 84384/s 329 MiB/s 0 0' 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:14.595 21:11:49 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.595 21:11:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:14.595 21:11:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:14.595 21:11:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.595 21:11:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.595 21:11:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:14.595 21:11:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:14.595 21:11:49 -- accel/accel.sh@41 -- # local IFS=, 00:08:14.595 21:11:49 -- accel/accel.sh@42 -- # jq -r . 00:08:14.595 [2024-07-26 21:11:49.227029] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
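With -T 2 the report above shows two threads pinned to core 0 (the 0,0 and 0,1 rows). Their rates add up to the 84384 transfers/s total and convert to the reported 329 MiB/s; that total stays in the same range as the earlier single-threaded 4096-byte run (86304/s), which is what one would expect when both threads share a single core. The reading is an interpretation; the arithmetic below uses only figures from the log.

#!/usr/bin/env bash
# Per-thread rows from the -T 2 report above (both threads run on core 0).
threads=(42272 42112)                 # rows 0,1 and 0,0
size=4096                             # "Transfer size: 4096 bytes"
total=$(( threads[0] + threads[1] ))
echo "total transfers/s: $total"                       # 84384, as reported
echo "aggregate: $(( total * size / 1048576 )) MiB/s"  # 329, as reported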
00:08:14.595 [2024-07-26 21:11:49.227095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526607 ] 00:08:14.595 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.595 [2024-07-26 21:11:49.310439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.595 [2024-07-26 21:11:49.344862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val= 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val= 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val= 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val=0x1 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val= 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val= 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val=decompress 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val= 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val=software 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@23 -- # accel_module=software 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val=32 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- 
accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val=32 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val=2 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val=Yes 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val= 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:14.595 21:11:49 -- accel/accel.sh@21 -- # val= 00:08:14.595 21:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # IFS=: 00:08:14.595 21:11:49 -- accel/accel.sh@20 -- # read -r var val 00:08:15.973 21:11:50 -- accel/accel.sh@21 -- # val= 00:08:15.973 21:11:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # IFS=: 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # read -r var val 00:08:15.973 21:11:50 -- accel/accel.sh@21 -- # val= 00:08:15.973 21:11:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # IFS=: 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # read -r var val 00:08:15.973 21:11:50 -- accel/accel.sh@21 -- # val= 00:08:15.973 21:11:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # IFS=: 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # read -r var val 00:08:15.973 21:11:50 -- accel/accel.sh@21 -- # val= 00:08:15.973 21:11:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # IFS=: 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # read -r var val 00:08:15.973 21:11:50 -- accel/accel.sh@21 -- # val= 00:08:15.973 21:11:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # IFS=: 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # read -r var val 00:08:15.973 21:11:50 -- accel/accel.sh@21 -- # val= 00:08:15.973 21:11:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # IFS=: 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # read -r var val 00:08:15.973 21:11:50 -- accel/accel.sh@21 -- # val= 00:08:15.973 21:11:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # IFS=: 00:08:15.973 21:11:50 -- accel/accel.sh@20 -- # read -r var val 00:08:15.973 21:11:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:15.973 21:11:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:15.973 21:11:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.973 00:08:15.973 real 0m2.640s 00:08:15.973 user 0m2.362s 00:08:15.973 sys 0m0.286s 00:08:15.973 21:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.973 21:11:50 -- common/autotest_common.sh@10 -- # set +x 
00:08:15.973 ************************************ 00:08:15.973 END TEST accel_decomp_mthread 00:08:15.973 ************************************ 00:08:15.973 21:11:50 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:15.973 21:11:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:15.973 21:11:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.973 21:11:50 -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 ************************************ 00:08:15.973 START TEST accel_deomp_full_mthread 00:08:15.973 ************************************ 00:08:15.973 21:11:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:15.973 21:11:50 -- accel/accel.sh@16 -- # local accel_opc 00:08:15.973 21:11:50 -- accel/accel.sh@17 -- # local accel_module 00:08:15.973 21:11:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:15.973 21:11:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:15.973 21:11:50 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.973 21:11:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:15.973 21:11:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.973 21:11:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.973 21:11:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:15.973 21:11:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:15.973 21:11:50 -- accel/accel.sh@41 -- # local IFS=, 00:08:15.973 21:11:50 -- accel/accel.sh@42 -- # jq -r . 00:08:15.973 [2024-07-26 21:11:50.598897] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:15.973 [2024-07-26 21:11:50.598985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526890 ] 00:08:15.973 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.973 [2024-07-26 21:11:50.683659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.973 [2024-07-26 21:11:50.718778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.350 21:11:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:17.350 00:08:17.350 SPDK Configuration: 00:08:17.350 Core mask: 0x1 00:08:17.350 00:08:17.350 Accel Perf Configuration: 00:08:17.350 Workload Type: decompress 00:08:17.350 Transfer size: 111250 bytes 00:08:17.350 Vector count 1 00:08:17.350 Module: software 00:08:17.350 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:17.350 Queue depth: 32 00:08:17.350 Allocate depth: 32 00:08:17.350 # threads/core: 2 00:08:17.350 Run time: 1 seconds 00:08:17.350 Verify: Yes 00:08:17.350 00:08:17.350 Running for 1 seconds... 
00:08:17.350 00:08:17.350 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:17.350 ------------------------------------------------------------------------------------ 00:08:17.350 0,1 2976/s 122 MiB/s 0 0 00:08:17.350 0,0 2944/s 121 MiB/s 0 0 00:08:17.350 ==================================================================================== 00:08:17.350 Total 5920/s 628 MiB/s 0 0' 00:08:17.350 21:11:51 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:51 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:17.350 21:11:51 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.350 21:11:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:17.350 21:11:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:17.350 21:11:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.350 21:11:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.350 21:11:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:17.350 21:11:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:17.350 21:11:51 -- accel/accel.sh@41 -- # local IFS=, 00:08:17.350 21:11:51 -- accel/accel.sh@42 -- # jq -r . 00:08:17.350 [2024-07-26 21:11:51.937147] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:17.350 [2024-07-26 21:11:51.937217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527089 ] 00:08:17.350 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.350 [2024-07-26 21:11:52.020630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.350 [2024-07-26 21:11:52.055408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val= 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val= 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val= 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val=0x1 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val= 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val= 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val=decompress 00:08:17.350 21:11:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val= 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val=software 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@23 -- # accel_module=software 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val=32 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val=32 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val=2 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val=Yes 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val= 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:17.350 21:11:52 -- accel/accel.sh@21 -- # val= 00:08:17.350 21:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # IFS=: 00:08:17.350 21:11:52 -- accel/accel.sh@20 -- # read -r var val 00:08:18.725 21:11:53 -- accel/accel.sh@21 -- # val= 00:08:18.725 21:11:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # IFS=: 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # read -r var val 00:08:18.725 21:11:53 -- accel/accel.sh@21 -- # val= 00:08:18.725 21:11:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # IFS=: 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # read -r var val 00:08:18.725 21:11:53 -- accel/accel.sh@21 -- # val= 00:08:18.725 21:11:53 -- accel/accel.sh@22 -- # case "$var" in 
00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # IFS=: 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # read -r var val 00:08:18.725 21:11:53 -- accel/accel.sh@21 -- # val= 00:08:18.725 21:11:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # IFS=: 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # read -r var val 00:08:18.725 21:11:53 -- accel/accel.sh@21 -- # val= 00:08:18.725 21:11:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # IFS=: 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # read -r var val 00:08:18.725 21:11:53 -- accel/accel.sh@21 -- # val= 00:08:18.725 21:11:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # IFS=: 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # read -r var val 00:08:18.725 21:11:53 -- accel/accel.sh@21 -- # val= 00:08:18.725 21:11:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # IFS=: 00:08:18.725 21:11:53 -- accel/accel.sh@20 -- # read -r var val 00:08:18.725 21:11:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:18.725 21:11:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:18.725 21:11:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.725 00:08:18.725 real 0m2.681s 00:08:18.725 user 0m2.391s 00:08:18.725 sys 0m0.298s 00:08:18.725 21:11:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.725 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:08:18.725 ************************************ 00:08:18.725 END TEST accel_deomp_full_mthread 00:08:18.725 ************************************ 00:08:18.725 21:11:53 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:18.725 21:11:53 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:18.725 21:11:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:18.725 21:11:53 -- accel/accel.sh@129 -- # build_accel_config 00:08:18.725 21:11:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.725 21:11:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:18.725 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:08:18.725 21:11:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.725 21:11:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.725 21:11:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:18.725 21:11:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:18.725 21:11:53 -- accel/accel.sh@41 -- # local IFS=, 00:08:18.725 21:11:53 -- accel/accel.sh@42 -- # jq -r . 00:08:18.725 ************************************ 00:08:18.725 START TEST accel_dif_functional_tests 00:08:18.725 ************************************ 00:08:18.725 21:11:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:18.725 [2024-07-26 21:11:53.343256] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:18.725 [2024-07-26 21:11:53.343309] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527303 ] 00:08:18.725 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.725 [2024-07-26 21:11:53.426967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.725 [2024-07-26 21:11:53.464242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.725 [2024-07-26 21:11:53.464340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.725 [2024-07-26 21:11:53.464343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.725 00:08:18.725 00:08:18.725 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.725 http://cunit.sourceforge.net/ 00:08:18.725 00:08:18.725 00:08:18.725 Suite: accel_dif 00:08:18.725 Test: verify: DIF generated, GUARD check ...passed 00:08:18.725 Test: verify: DIF generated, APPTAG check ...passed 00:08:18.725 Test: verify: DIF generated, REFTAG check ...passed 00:08:18.725 Test: verify: DIF not generated, GUARD check ...[2024-07-26 21:11:53.527428] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:18.725 [2024-07-26 21:11:53.527473] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:18.725 passed 00:08:18.725 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 21:11:53.527504] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:18.725 [2024-07-26 21:11:53.527520] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:18.725 passed 00:08:18.725 Test: verify: DIF not generated, REFTAG check ...[2024-07-26 21:11:53.527538] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:18.725 [2024-07-26 21:11:53.527554] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:18.725 passed 00:08:18.725 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:18.725 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-26 21:11:53.527597] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:18.725 passed 00:08:18.725 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:18.725 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:18.725 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:18.725 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-26 21:11:53.527703] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:18.725 passed 00:08:18.725 Test: generate copy: DIF generated, GUARD check ...passed 00:08:18.725 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:18.725 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:18.725 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:18.725 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:18.725 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:18.725 Test: generate copy: iovecs-len validate ...[2024-07-26 21:11:53.527868] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:18.725 passed 00:08:18.725 Test: generate copy: buffer alignment validate ...passed 00:08:18.725 00:08:18.725 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.725 suites 1 1 n/a 0 0 00:08:18.725 tests 20 20 20 0 0 00:08:18.725 asserts 204 204 204 0 n/a 00:08:18.725 00:08:18.725 Elapsed time = 0.002 seconds 00:08:18.984 00:08:18.984 real 0m0.384s 00:08:18.984 user 0m0.552s 00:08:18.984 sys 0m0.175s 00:08:18.984 21:11:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.984 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:08:18.984 ************************************ 00:08:18.984 END TEST accel_dif_functional_tests 00:08:18.984 ************************************ 00:08:18.984 00:08:18.984 real 0m56.258s 00:08:18.984 user 1m3.348s 00:08:18.984 sys 0m7.540s 00:08:18.984 21:11:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.984 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:08:18.984 ************************************ 00:08:18.984 END TEST accel 00:08:18.984 ************************************ 00:08:18.984 21:11:53 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:18.984 21:11:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:18.984 21:11:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.984 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:08:18.984 ************************************ 00:08:18.984 START TEST accel_rpc 00:08:18.984 ************************************ 00:08:18.984 21:11:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:19.243 * Looking for test storage... 00:08:19.243 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:19.243 21:11:53 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:19.243 21:11:53 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1527509 00:08:19.243 21:11:53 -- accel/accel_rpc.sh@15 -- # waitforlisten 1527509 00:08:19.243 21:11:53 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:19.243 21:11:53 -- common/autotest_common.sh@819 -- # '[' -z 1527509 ']' 00:08:19.243 21:11:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.243 21:11:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:19.243 21:11:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.243 21:11:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:19.243 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:08:19.243 [2024-07-26 21:11:53.926729] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
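Every test in this log is wrapped by run_test from common/autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys timings shown above. A simplified sketch of that convention, not the actual implementation:

    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # produces the 'real/user/sys' lines seen throughout this log
        echo "************ END TEST $name ************"
    }
    # usage, mirroring the DIF functional test above:
    # run_test_sketch accel_dif_functional_tests \
    #     /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62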
00:08:19.243 [2024-07-26 21:11:53.926821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527509 ] 00:08:19.243 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.243 [2024-07-26 21:11:54.011484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.243 [2024-07-26 21:11:54.049083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.243 [2024-07-26 21:11:54.049209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.180 21:11:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:20.180 21:11:54 -- common/autotest_common.sh@852 -- # return 0 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:20.180 21:11:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.180 21:11:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.180 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:20.180 ************************************ 00:08:20.180 START TEST accel_assign_opcode 00:08:20.180 ************************************ 00:08:20.180 21:11:54 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:20.180 21:11:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.180 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:20.180 [2024-07-26 21:11:54.719187] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:20.180 21:11:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:20.180 21:11:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.180 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:20.180 [2024-07-26 21:11:54.727203] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:20.180 21:11:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:20.180 21:11:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.180 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:20.180 21:11:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:20.180 21:11:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.180 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@42 -- # grep software 00:08:20.180 21:11:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.180 software 00:08:20.180 00:08:20.180 real 0m0.228s 00:08:20.180 user 0m0.042s 00:08:20.180 sys 0m0.011s 00:08:20.180 21:11:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.180 21:11:54 -- common/autotest_common.sh@10 -- # set +x 
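The assign-opcode exercise above is a short RPC conversation with a target started under --wait-for-rpc: the copy opcode is reassigned twice before framework_start_init, and the resulting mapping is read back with accel_get_opc_assignments. Roughly, by hand (paths relative to the SPDK tree):

    ./build/bin/spdk_tgt --wait-for-rpc &
    # wait for /var/tmp/spdk.sock to appear before issuing RPCs
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init even for a bogus module
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software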
00:08:20.180 ************************************ 00:08:20.180 END TEST accel_assign_opcode 00:08:20.180 ************************************ 00:08:20.180 21:11:54 -- accel/accel_rpc.sh@55 -- # killprocess 1527509 00:08:20.180 21:11:54 -- common/autotest_common.sh@926 -- # '[' -z 1527509 ']' 00:08:20.180 21:11:54 -- common/autotest_common.sh@930 -- # kill -0 1527509 00:08:20.180 21:11:54 -- common/autotest_common.sh@931 -- # uname 00:08:20.180 21:11:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:20.180 21:11:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1527509 00:08:20.180 21:11:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:20.180 21:11:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:20.180 21:11:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1527509' 00:08:20.180 killing process with pid 1527509 00:08:20.180 21:11:55 -- common/autotest_common.sh@945 -- # kill 1527509 00:08:20.180 21:11:55 -- common/autotest_common.sh@950 -- # wait 1527509 00:08:20.748 00:08:20.748 real 0m1.550s 00:08:20.748 user 0m1.543s 00:08:20.748 sys 0m0.474s 00:08:20.748 21:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.748 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:08:20.748 ************************************ 00:08:20.748 END TEST accel_rpc 00:08:20.748 ************************************ 00:08:20.748 21:11:55 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:20.748 21:11:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.748 21:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.748 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:08:20.748 ************************************ 00:08:20.748 START TEST app_cmdline 00:08:20.748 ************************************ 00:08:20.748 21:11:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:20.748 * Looking for test storage... 00:08:20.748 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:20.748 21:11:55 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:20.748 21:11:55 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1527852 00:08:20.748 21:11:55 -- app/cmdline.sh@18 -- # waitforlisten 1527852 00:08:20.748 21:11:55 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:20.748 21:11:55 -- common/autotest_common.sh@819 -- # '[' -z 1527852 ']' 00:08:20.748 21:11:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.748 21:11:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:20.748 21:11:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.748 21:11:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:20.748 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:08:20.748 [2024-07-26 21:11:55.519266] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:20.749 [2024-07-26 21:11:55.519323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527852 ] 00:08:20.749 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.749 [2024-07-26 21:11:55.602730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.007 [2024-07-26 21:11:55.640391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.007 [2024-07-26 21:11:55.640501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.574 21:11:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:21.574 21:11:56 -- common/autotest_common.sh@852 -- # return 0 00:08:21.574 21:11:56 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:21.890 { 00:08:21.890 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:08:21.890 "fields": { 00:08:21.890 "major": 24, 00:08:21.890 "minor": 1, 00:08:21.890 "patch": 1, 00:08:21.890 "suffix": "-pre", 00:08:21.890 "commit": "dbef7efac" 00:08:21.890 } 00:08:21.890 } 00:08:21.890 21:11:56 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:21.890 21:11:56 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:21.890 21:11:56 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:21.890 21:11:56 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:21.890 21:11:56 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:21.890 21:11:56 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:21.890 21:11:56 -- app/cmdline.sh@26 -- # sort 00:08:21.890 21:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.890 21:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:21.890 21:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.890 21:11:56 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:21.890 21:11:56 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:21.890 21:11:56 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.890 21:11:56 -- common/autotest_common.sh@640 -- # local es=0 00:08:21.890 21:11:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.890 21:11:56 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.890 21:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.890 21:11:56 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.890 21:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.890 21:11:56 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.890 21:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.890 21:11:56 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.890 21:11:56 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:21.890 21:11:56 -- common/autotest_common.sh@643 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.890 request: 00:08:21.890 { 00:08:21.890 "method": "env_dpdk_get_mem_stats", 00:08:21.890 "req_id": 1 00:08:21.890 } 00:08:21.890 Got JSON-RPC error response 00:08:21.890 response: 00:08:21.890 { 00:08:21.890 "code": -32601, 00:08:21.890 "message": "Method not found" 00:08:21.890 } 00:08:21.890 21:11:56 -- common/autotest_common.sh@643 -- # es=1 00:08:21.890 21:11:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:21.890 21:11:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:21.890 21:11:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:21.890 21:11:56 -- app/cmdline.sh@1 -- # killprocess 1527852 00:08:21.890 21:11:56 -- common/autotest_common.sh@926 -- # '[' -z 1527852 ']' 00:08:21.890 21:11:56 -- common/autotest_common.sh@930 -- # kill -0 1527852 00:08:21.890 21:11:56 -- common/autotest_common.sh@931 -- # uname 00:08:21.890 21:11:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:21.890 21:11:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1527852 00:08:21.890 21:11:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:21.890 21:11:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:21.890 21:11:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1527852' 00:08:21.890 killing process with pid 1527852 00:08:21.890 21:11:56 -- common/autotest_common.sh@945 -- # kill 1527852 00:08:21.890 21:11:56 -- common/autotest_common.sh@950 -- # wait 1527852 00:08:22.184 00:08:22.184 real 0m1.660s 00:08:22.184 user 0m1.915s 00:08:22.184 sys 0m0.489s 00:08:22.184 21:11:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.184 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:08:22.184 ************************************ 00:08:22.184 END TEST app_cmdline 00:08:22.184 ************************************ 00:08:22.443 21:11:57 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:22.443 21:11:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:22.443 21:11:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.443 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:08:22.443 ************************************ 00:08:22.443 START TEST version 00:08:22.443 ************************************ 00:08:22.443 21:11:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:22.443 * Looking for test storage... 
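The cmdline test above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and any other call is rejected with JSON-RPC error -32601, exactly as the env_dpdk_get_mem_stats attempt shows. A condensed sketch of the same check:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version                       # allowed: returns the version object above
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # allowed: lists exactly the two permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats                 # expected to fail: "Method not found" (-32601)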
00:08:22.443 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:22.443 21:11:57 -- app/version.sh@17 -- # get_header_version major 00:08:22.443 21:11:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:22.444 21:11:57 -- app/version.sh@14 -- # cut -f2 00:08:22.444 21:11:57 -- app/version.sh@14 -- # tr -d '"' 00:08:22.444 21:11:57 -- app/version.sh@17 -- # major=24 00:08:22.444 21:11:57 -- app/version.sh@18 -- # get_header_version minor 00:08:22.444 21:11:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:22.444 21:11:57 -- app/version.sh@14 -- # cut -f2 00:08:22.444 21:11:57 -- app/version.sh@14 -- # tr -d '"' 00:08:22.444 21:11:57 -- app/version.sh@18 -- # minor=1 00:08:22.444 21:11:57 -- app/version.sh@19 -- # get_header_version patch 00:08:22.444 21:11:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:22.444 21:11:57 -- app/version.sh@14 -- # cut -f2 00:08:22.444 21:11:57 -- app/version.sh@14 -- # tr -d '"' 00:08:22.444 21:11:57 -- app/version.sh@19 -- # patch=1 00:08:22.444 21:11:57 -- app/version.sh@20 -- # get_header_version suffix 00:08:22.444 21:11:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:22.444 21:11:57 -- app/version.sh@14 -- # cut -f2 00:08:22.444 21:11:57 -- app/version.sh@14 -- # tr -d '"' 00:08:22.444 21:11:57 -- app/version.sh@20 -- # suffix=-pre 00:08:22.444 21:11:57 -- app/version.sh@22 -- # version=24.1 00:08:22.444 21:11:57 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:22.444 21:11:57 -- app/version.sh@25 -- # version=24.1.1 00:08:22.444 21:11:57 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:22.444 21:11:57 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:22.444 21:11:57 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:22.444 21:11:57 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:22.444 21:11:57 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:22.444 00:08:22.444 real 0m0.155s 00:08:22.444 user 0m0.063s 00:08:22.444 sys 0m0.134s 00:08:22.444 21:11:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.444 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:08:22.444 ************************************ 00:08:22.444 END TEST version 00:08:22.444 ************************************ 00:08:22.444 21:11:57 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:22.444 21:11:57 -- spdk/autotest.sh@204 -- # uname -s 00:08:22.444 21:11:57 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:22.444 21:11:57 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:22.444 21:11:57 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:22.444 21:11:57 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:08:22.444 21:11:57 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:08:22.444 21:11:57 -- spdk/autotest.sh@268 -- # timing_exit lib 00:08:22.444 21:11:57 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:08:22.444 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:08:22.703 21:11:57 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:22.703 21:11:57 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:08:22.703 21:11:57 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:08:22.703 21:11:57 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:08:22.703 21:11:57 -- spdk/autotest.sh@291 -- # '[' rdma = rdma ']' 00:08:22.703 21:11:57 -- spdk/autotest.sh@292 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:22.703 21:11:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:22.703 21:11:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.703 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:08:22.703 ************************************ 00:08:22.703 START TEST nvmf_rdma 00:08:22.703 ************************************ 00:08:22.704 21:11:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:22.704 * Looking for test storage... 00:08:22.704 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:22.704 21:11:57 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:22.704 21:11:57 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:22.704 21:11:57 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.704 21:11:57 -- nvmf/common.sh@7 -- # uname -s 00:08:22.704 21:11:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.704 21:11:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.704 21:11:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.704 21:11:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.704 21:11:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.704 21:11:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.704 21:11:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.704 21:11:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.704 21:11:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.704 21:11:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.704 21:11:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:22.704 21:11:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:22.704 21:11:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.704 21:11:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.704 21:11:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.704 21:11:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:22.704 21:11:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.704 21:11:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.704 21:11:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.704 21:11:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.704 21:11:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.704 21:11:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.704 21:11:57 -- paths/export.sh@5 -- # export PATH 00:08:22.704 21:11:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.704 21:11:57 -- nvmf/common.sh@46 -- # : 0 00:08:22.704 21:11:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:22.704 21:11:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:22.704 21:11:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:22.704 21:11:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.704 21:11:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.704 21:11:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:22.704 21:11:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:22.704 21:11:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:22.704 21:11:57 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:22.704 21:11:57 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:22.704 21:11:57 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:22.704 21:11:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:22.704 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:08:22.704 21:11:57 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:22.704 21:11:57 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:22.704 21:11:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:22.704 21:11:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.704 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:08:22.704 ************************************ 00:08:22.704 START TEST nvmf_example 00:08:22.704 ************************************ 00:08:22.704 21:11:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:22.704 * Looking for test storage... 
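The version test that finished just above derives 24.1.1rc0 by grepping the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines out of include/spdk/version.h and comparing the result against python3 -c 'import spdk; print(spdk.__version__)'. The extraction step, reduced to a single field (header path assumed relative to the SPDK tree):

    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "$major"   # 24 for this build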
00:08:22.704 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:22.704 21:11:57 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.704 21:11:57 -- nvmf/common.sh@7 -- # uname -s 00:08:22.704 21:11:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.704 21:11:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.704 21:11:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.704 21:11:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.704 21:11:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.704 21:11:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.704 21:11:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.704 21:11:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.704 21:11:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.704 21:11:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.704 21:11:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:22.704 21:11:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:22.704 21:11:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.704 21:11:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.704 21:11:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.704 21:11:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:22.704 21:11:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.704 21:11:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.704 21:11:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.704 21:11:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.704 21:11:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.704 21:11:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.704 21:11:57 -- paths/export.sh@5 -- # export PATH 00:08:22.704 21:11:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.704 21:11:57 -- nvmf/common.sh@46 -- # : 0 00:08:22.704 21:11:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:22.704 21:11:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:22.704 21:11:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:22.704 21:11:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.704 21:11:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.704 21:11:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:22.704 21:11:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:22.704 21:11:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:22.704 21:11:57 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:22.704 21:11:57 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:22.704 21:11:57 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:22.704 21:11:57 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:22.704 21:11:57 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:22.704 21:11:57 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:22.704 21:11:57 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:22.704 21:11:57 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:22.704 21:11:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:22.704 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:08:22.704 21:11:57 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:22.704 21:11:57 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:22.704 21:11:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.705 21:11:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:22.705 21:11:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:22.705 21:11:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:22.705 21:11:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.705 21:11:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.705 21:11:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.964 21:11:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:22.964 21:11:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:22.964 21:11:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:22.964 21:11:57 -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.085 21:12:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:31.085 21:12:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:31.085 21:12:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:31.085 21:12:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:31.085 21:12:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:31.085 21:12:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:31.085 21:12:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:31.085 21:12:05 -- nvmf/common.sh@294 -- # net_devs=() 00:08:31.085 21:12:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:31.085 21:12:05 -- nvmf/common.sh@295 -- # e810=() 00:08:31.085 21:12:05 -- nvmf/common.sh@295 -- # local -ga e810 00:08:31.085 21:12:05 -- nvmf/common.sh@296 -- # x722=() 00:08:31.085 21:12:05 -- nvmf/common.sh@296 -- # local -ga x722 00:08:31.085 21:12:05 -- nvmf/common.sh@297 -- # mlx=() 00:08:31.085 21:12:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:31.085 21:12:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.085 21:12:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:31.086 21:12:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:31.086 21:12:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:31.086 21:12:05 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:31.086 21:12:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:31.086 21:12:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:31.086 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:31.086 21:12:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:31.086 21:12:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:31.086 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:31.086 21:12:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:31.086 21:12:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:31.086 21:12:05 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.086 21:12:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:31.086 21:12:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.086 21:12:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:31.086 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:31.086 21:12:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.086 21:12:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.086 21:12:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:31.086 21:12:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.086 21:12:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:31.086 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:31.086 21:12:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.086 21:12:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:31.086 21:12:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:31.086 21:12:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:31.086 21:12:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:31.086 21:12:05 -- nvmf/common.sh@57 -- # uname 00:08:31.086 21:12:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:31.086 21:12:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:31.086 21:12:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:31.086 21:12:05 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:31.086 21:12:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:31.086 21:12:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:31.086 21:12:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:31.086 21:12:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:31.086 21:12:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:31.086 21:12:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:31.086 21:12:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:31.086 21:12:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:31.086 21:12:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:31.086 21:12:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:31.086 21:12:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:31.086 21:12:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:31.086 21:12:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:08:31.086 21:12:05 -- nvmf/common.sh@104 -- # continue 2 00:08:31.086 21:12:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:31.086 21:12:05 -- nvmf/common.sh@104 -- # continue 2 00:08:31.086 21:12:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:31.086 21:12:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:31.086 21:12:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:31.086 21:12:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:31.086 21:12:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:31.086 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:31.086 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:31.086 altname enp217s0f0np0 00:08:31.086 altname ens818f0np0 00:08:31.086 inet 192.168.100.8/24 scope global mlx_0_0 00:08:31.086 valid_lft forever preferred_lft forever 00:08:31.086 21:12:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:31.086 21:12:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:31.086 21:12:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:31.086 21:12:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:31.086 21:12:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:31.086 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:31.086 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:31.086 altname enp217s0f1np1 00:08:31.086 altname ens818f1np1 00:08:31.086 inet 192.168.100.9/24 scope global mlx_0_1 00:08:31.086 valid_lft forever preferred_lft forever 00:08:31.086 21:12:05 -- nvmf/common.sh@410 -- # return 0 00:08:31.086 21:12:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:31.086 21:12:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:31.086 21:12:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:31.086 21:12:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:31.086 21:12:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:31.086 21:12:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:31.086 21:12:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:31.086 21:12:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:31.086 21:12:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:31.086 21:12:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.086 21:12:05 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:31.086 21:12:05 -- nvmf/common.sh@104 -- # continue 2 00:08:31.086 21:12:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.086 21:12:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:31.086 21:12:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:31.086 21:12:05 -- nvmf/common.sh@104 -- # continue 2 00:08:31.086 21:12:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:31.086 21:12:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:31.086 21:12:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:31.086 21:12:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:31.086 21:12:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:31.086 21:12:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:31.086 21:12:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:31.086 21:12:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:31.086 192.168.100.9' 00:08:31.086 21:12:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:31.086 192.168.100.9' 00:08:31.086 21:12:05 -- nvmf/common.sh@445 -- # head -n 1 00:08:31.086 21:12:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:31.086 21:12:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:31.086 192.168.100.9' 00:08:31.086 21:12:05 -- nvmf/common.sh@446 -- # tail -n +2 00:08:31.086 21:12:05 -- nvmf/common.sh@446 -- # head -n 1 00:08:31.086 21:12:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:31.086 21:12:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:31.086 21:12:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:31.086 21:12:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:31.086 21:12:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:31.087 21:12:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:31.087 21:12:05 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:31.087 21:12:05 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:31.087 21:12:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:31.087 21:12:05 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 21:12:05 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:31.087 21:12:05 -- target/nvmf_example.sh@34 -- # nvmfpid=1532762 00:08:31.087 21:12:05 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.087 21:12:05 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:31.087 21:12:05 -- target/nvmf_example.sh@36 -- # waitforlisten 1532762 00:08:31.087 21:12:05 -- common/autotest_common.sh@819 -- # '[' -z 1532762 ']' 00:08:31.087 21:12:05 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.087 21:12:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:31.087 21:12:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.087 21:12:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:31.087 21:12:05 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.654 21:12:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:31.654 21:12:06 -- common/autotest_common.sh@852 -- # return 0 00:08:31.654 21:12:06 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:31.654 21:12:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:31.654 21:12:06 -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 21:12:06 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:31.913 21:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.913 21:12:06 -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 21:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.913 21:12:06 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:31.913 21:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.913 21:12:06 -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 21:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.913 21:12:06 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:31.913 21:12:06 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.913 21:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.913 21:12:06 -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 21:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.913 21:12:06 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:31.913 21:12:06 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.913 21:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.913 21:12:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.171 21:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.171 21:12:06 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:32.171 21:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.171 21:12:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.171 21:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.171 21:12:06 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:32.171 21:12:06 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:32.172 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.386 Initializing NVMe Controllers 00:08:44.386 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:44.386 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
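For reference, the rpc_cmd calls traced above are the whole of the target bring-up that the perf run below exercises. A minimal standalone sketch of the same sequence, assuming an already-running nvmf_tgt and the standard scripts/rpc.py helper (in this job the RPCs are driven against the nvmf example app instead), with the transport options, NQN, and listener address copied from this run:

  # create the RDMA transport and a 64 MB malloc bdev (512-byte blocks), named Malloc0 here
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512
  # subsystem cnode1 with the malloc bdev as NSID 1, listening on the first mlx5 port
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # initiator side, exactly as launched above: 64-deep 4 KiB random I/O, 30% reads, for 10 seconds
  spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'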
00:08:44.386 Initialization complete. Launching workers. 00:08:44.386 ======================================================== 00:08:44.386 Latency(us) 00:08:44.386 Device Information : IOPS MiB/s Average min max 00:08:44.386 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 27699.57 108.20 2311.99 595.01 12985.21 00:08:44.386 ======================================================== 00:08:44.386 Total : 27699.57 108.20 2311.99 595.01 12985.21 00:08:44.386 00:08:44.386 21:12:18 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:44.386 21:12:18 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:44.386 21:12:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:44.386 21:12:18 -- nvmf/common.sh@116 -- # sync 00:08:44.386 21:12:18 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:44.386 21:12:18 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:44.386 21:12:18 -- nvmf/common.sh@119 -- # set +e 00:08:44.386 21:12:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:44.386 21:12:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:44.386 rmmod nvme_rdma 00:08:44.386 rmmod nvme_fabrics 00:08:44.386 21:12:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:44.386 21:12:18 -- nvmf/common.sh@123 -- # set -e 00:08:44.386 21:12:18 -- nvmf/common.sh@124 -- # return 0 00:08:44.386 21:12:18 -- nvmf/common.sh@477 -- # '[' -n 1532762 ']' 00:08:44.386 21:12:18 -- nvmf/common.sh@478 -- # killprocess 1532762 00:08:44.386 21:12:18 -- common/autotest_common.sh@926 -- # '[' -z 1532762 ']' 00:08:44.386 21:12:18 -- common/autotest_common.sh@930 -- # kill -0 1532762 00:08:44.386 21:12:18 -- common/autotest_common.sh@931 -- # uname 00:08:44.386 21:12:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:44.386 21:12:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1532762 00:08:44.386 21:12:18 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:44.386 21:12:18 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:44.386 21:12:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1532762' 00:08:44.386 killing process with pid 1532762 00:08:44.386 21:12:18 -- common/autotest_common.sh@945 -- # kill 1532762 00:08:44.386 21:12:18 -- common/autotest_common.sh@950 -- # wait 1532762 00:08:44.386 nvmf threads initialize successfully 00:08:44.386 bdev subsystem init successfully 00:08:44.386 created a nvmf target service 00:08:44.386 create targets's poll groups done 00:08:44.386 all subsystems of target started 00:08:44.386 nvmf target is running 00:08:44.386 all subsystems of target stopped 00:08:44.386 destroy targets's poll groups done 00:08:44.386 destroyed the nvmf target service 00:08:44.386 bdev subsystem finish successfully 00:08:44.386 nvmf threads destroy successfully 00:08:44.386 21:12:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:44.386 21:12:18 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:44.386 21:12:18 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:44.386 21:12:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:44.386 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:08:44.386 00:08:44.386 real 0m20.960s 00:08:44.386 user 0m52.282s 00:08:44.386 sys 0m6.667s 00:08:44.386 21:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.386 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:08:44.386 ************************************ 00:08:44.386 END TEST nvmf_example 00:08:44.386 
************************************ 00:08:44.386 21:12:18 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:44.386 21:12:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:44.386 21:12:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.386 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:08:44.386 ************************************ 00:08:44.386 START TEST nvmf_filesystem 00:08:44.386 ************************************ 00:08:44.386 21:12:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:44.386 * Looking for test storage... 00:08:44.386 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.386 21:12:18 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:44.386 21:12:18 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:44.386 21:12:18 -- common/autotest_common.sh@34 -- # set -e 00:08:44.386 21:12:18 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:44.386 21:12:18 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:44.386 21:12:18 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:44.386 21:12:18 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:44.386 21:12:18 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:44.386 21:12:18 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:44.386 21:12:18 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:44.386 21:12:18 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:44.386 21:12:18 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:44.386 21:12:18 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:44.386 21:12:18 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:44.386 21:12:18 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:44.386 21:12:18 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:44.386 21:12:18 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:44.386 21:12:18 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:44.386 21:12:18 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:44.386 21:12:18 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:44.386 21:12:18 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:44.386 21:12:18 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:44.386 21:12:18 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:44.386 21:12:18 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:44.386 21:12:18 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:44.386 21:12:18 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:44.386 21:12:18 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:44.386 21:12:18 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:44.386 21:12:18 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:44.386 21:12:18 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:44.386 21:12:18 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:44.386 21:12:18 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:44.386 21:12:18 -- common/build_config.sh@26 -- # 
CONFIG_HAVE_ARC4RANDOM=y 00:08:44.386 21:12:18 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:44.387 21:12:18 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:44.387 21:12:18 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:44.387 21:12:18 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:44.387 21:12:18 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:44.387 21:12:18 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:44.387 21:12:18 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:44.387 21:12:18 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:44.387 21:12:18 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:44.387 21:12:18 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:44.387 21:12:18 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:44.387 21:12:18 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:44.387 21:12:18 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:44.387 21:12:18 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:44.387 21:12:18 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:44.387 21:12:18 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:44.387 21:12:18 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:44.387 21:12:18 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:44.387 21:12:18 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:44.387 21:12:18 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:44.387 21:12:18 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:44.387 21:12:18 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:44.387 21:12:18 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:44.387 21:12:18 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:44.387 21:12:18 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:44.387 21:12:18 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:44.387 21:12:18 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:44.387 21:12:18 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:44.387 21:12:18 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:44.387 21:12:18 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:44.387 21:12:18 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:44.387 21:12:18 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:44.387 21:12:18 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:44.387 21:12:18 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:44.387 21:12:18 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:44.387 21:12:18 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:44.387 21:12:18 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:44.387 21:12:18 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:44.387 21:12:18 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:44.387 21:12:18 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:44.387 21:12:18 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:44.387 21:12:18 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:44.387 21:12:18 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:44.387 21:12:18 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:44.387 21:12:18 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:44.387 21:12:18 
-- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:44.387 21:12:18 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:44.387 21:12:18 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:44.387 21:12:18 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:44.387 21:12:18 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:44.387 21:12:18 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:44.387 21:12:18 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:44.387 21:12:18 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:44.387 21:12:18 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:44.387 21:12:18 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:44.387 21:12:18 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:44.387 21:12:18 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:44.387 21:12:18 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:44.387 21:12:18 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:44.387 21:12:18 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:44.387 21:12:18 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:44.387 21:12:18 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:44.387 21:12:18 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:44.387 21:12:18 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:44.387 21:12:18 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:44.387 21:12:18 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:44.387 21:12:18 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:44.387 21:12:18 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:44.387 21:12:18 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:44.387 #define SPDK_CONFIG_H 00:08:44.387 #define SPDK_CONFIG_APPS 1 00:08:44.387 #define SPDK_CONFIG_ARCH native 00:08:44.387 #undef SPDK_CONFIG_ASAN 00:08:44.387 #undef SPDK_CONFIG_AVAHI 00:08:44.387 #undef SPDK_CONFIG_CET 00:08:44.387 #define SPDK_CONFIG_COVERAGE 1 00:08:44.387 #define SPDK_CONFIG_CROSS_PREFIX 00:08:44.387 #undef SPDK_CONFIG_CRYPTO 00:08:44.387 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:44.387 #undef SPDK_CONFIG_CUSTOMOCF 00:08:44.387 #undef SPDK_CONFIG_DAOS 00:08:44.387 #define SPDK_CONFIG_DAOS_DIR 00:08:44.387 #define SPDK_CONFIG_DEBUG 1 00:08:44.387 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:44.387 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:44.387 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:44.387 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:44.387 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:44.387 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:44.387 #define SPDK_CONFIG_EXAMPLES 1 00:08:44.387 #undef SPDK_CONFIG_FC 00:08:44.387 #define SPDK_CONFIG_FC_PATH 00:08:44.387 #define 
SPDK_CONFIG_FIO_PLUGIN 1 00:08:44.387 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:44.387 #undef SPDK_CONFIG_FUSE 00:08:44.387 #undef SPDK_CONFIG_FUZZER 00:08:44.387 #define SPDK_CONFIG_FUZZER_LIB 00:08:44.387 #undef SPDK_CONFIG_GOLANG 00:08:44.387 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:44.387 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:44.387 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:44.387 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:44.387 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:44.387 #define SPDK_CONFIG_IDXD 1 00:08:44.387 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:44.387 #undef SPDK_CONFIG_IPSEC_MB 00:08:44.387 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:44.387 #define SPDK_CONFIG_ISAL 1 00:08:44.387 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:44.387 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:44.387 #define SPDK_CONFIG_LIBDIR 00:08:44.387 #undef SPDK_CONFIG_LTO 00:08:44.387 #define SPDK_CONFIG_MAX_LCORES 00:08:44.387 #define SPDK_CONFIG_NVME_CUSE 1 00:08:44.387 #undef SPDK_CONFIG_OCF 00:08:44.387 #define SPDK_CONFIG_OCF_PATH 00:08:44.387 #define SPDK_CONFIG_OPENSSL_PATH 00:08:44.387 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:44.387 #undef SPDK_CONFIG_PGO_USE 00:08:44.387 #define SPDK_CONFIG_PREFIX /usr/local 00:08:44.387 #undef SPDK_CONFIG_RAID5F 00:08:44.387 #undef SPDK_CONFIG_RBD 00:08:44.387 #define SPDK_CONFIG_RDMA 1 00:08:44.387 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:44.387 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:44.387 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:44.387 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:44.387 #define SPDK_CONFIG_SHARED 1 00:08:44.387 #undef SPDK_CONFIG_SMA 00:08:44.387 #define SPDK_CONFIG_TESTS 1 00:08:44.387 #undef SPDK_CONFIG_TSAN 00:08:44.387 #define SPDK_CONFIG_UBLK 1 00:08:44.387 #define SPDK_CONFIG_UBSAN 1 00:08:44.387 #undef SPDK_CONFIG_UNIT_TESTS 00:08:44.387 #undef SPDK_CONFIG_URING 00:08:44.387 #define SPDK_CONFIG_URING_PATH 00:08:44.387 #undef SPDK_CONFIG_URING_ZNS 00:08:44.387 #undef SPDK_CONFIG_USDT 00:08:44.387 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:44.387 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:44.387 #undef SPDK_CONFIG_VFIO_USER 00:08:44.387 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:44.387 #define SPDK_CONFIG_VHOST 1 00:08:44.387 #define SPDK_CONFIG_VIRTIO 1 00:08:44.387 #undef SPDK_CONFIG_VTUNE 00:08:44.387 #define SPDK_CONFIG_VTUNE_DIR 00:08:44.387 #define SPDK_CONFIG_WERROR 1 00:08:44.387 #define SPDK_CONFIG_WPDK_DIR 00:08:44.387 #undef SPDK_CONFIG_XNVME 00:08:44.387 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:44.387 21:12:18 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:44.387 21:12:18 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:44.387 21:12:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.387 21:12:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.387 21:12:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.387 21:12:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.388 21:12:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.388 21:12:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.388 21:12:18 -- paths/export.sh@5 -- # export PATH 00:08:44.388 21:12:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.388 21:12:18 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:44.388 21:12:18 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:44.388 21:12:18 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:44.388 21:12:18 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:44.388 21:12:18 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:44.388 21:12:18 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:44.388 21:12:18 -- pm/common@16 -- # TEST_TAG=N/A 00:08:44.388 21:12:18 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:44.388 21:12:18 -- common/autotest_common.sh@52 -- # : 1 00:08:44.388 21:12:18 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:44.388 21:12:18 -- common/autotest_common.sh@56 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:44.388 21:12:18 -- common/autotest_common.sh@58 -- # : 0 
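The long run of paired ': value' / 'export NAME' entries above and below this point is the xtrace of autotest_common.sh assigning defaults to its test-selection flags: the ':' no-op builtin forces a ${VAR:=default} expansion, so the trace shows the already-expanded value, and the following export publishes the flag to child scripts. A paraphrased sketch of the idiom, using flag names and values visible in this run (the exact source lines may differ):

  : "${SPDK_RUN_FUNCTIONAL_TEST:=1}"     # traced below as ': 1' then 'export SPDK_RUN_FUNCTIONAL_TEST'
  export SPDK_RUN_FUNCTIONAL_TEST
  : "${SPDK_TEST_NVMF:=1}"               # this job runs the NVMe-oF target tests
  export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"  # traced as ': rdma' -- transport under test
  export SPDK_TEST_NVMF_TRANSPORT
  : "${SPDK_TEST_NVMF_NICS:=mlx5}"       # traced as ': mlx5' -- Mellanox ConnectX NICs
  export SPDK_TEST_NVMF_NICS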
00:08:44.388 21:12:18 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:44.388 21:12:18 -- common/autotest_common.sh@60 -- # : 1 00:08:44.388 21:12:18 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:44.388 21:12:18 -- common/autotest_common.sh@62 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:44.388 21:12:18 -- common/autotest_common.sh@64 -- # : 00:08:44.388 21:12:18 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:44.388 21:12:18 -- common/autotest_common.sh@66 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:44.388 21:12:18 -- common/autotest_common.sh@68 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:44.388 21:12:18 -- common/autotest_common.sh@70 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:44.388 21:12:18 -- common/autotest_common.sh@72 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:44.388 21:12:18 -- common/autotest_common.sh@74 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:44.388 21:12:18 -- common/autotest_common.sh@76 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:44.388 21:12:18 -- common/autotest_common.sh@78 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:44.388 21:12:18 -- common/autotest_common.sh@80 -- # : 1 00:08:44.388 21:12:18 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:44.388 21:12:18 -- common/autotest_common.sh@82 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:44.388 21:12:18 -- common/autotest_common.sh@84 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:44.388 21:12:18 -- common/autotest_common.sh@86 -- # : 1 00:08:44.388 21:12:18 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:44.388 21:12:18 -- common/autotest_common.sh@88 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:44.388 21:12:18 -- common/autotest_common.sh@90 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:44.388 21:12:18 -- common/autotest_common.sh@92 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:44.388 21:12:18 -- common/autotest_common.sh@94 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:44.388 21:12:18 -- common/autotest_common.sh@96 -- # : rdma 00:08:44.388 21:12:18 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:44.388 21:12:18 -- common/autotest_common.sh@98 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:44.388 21:12:18 -- common/autotest_common.sh@100 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:44.388 21:12:18 -- common/autotest_common.sh@102 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:44.388 21:12:18 -- common/autotest_common.sh@104 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:44.388 21:12:18 -- common/autotest_common.sh@106 
-- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:44.388 21:12:18 -- common/autotest_common.sh@108 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:44.388 21:12:18 -- common/autotest_common.sh@110 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:44.388 21:12:18 -- common/autotest_common.sh@112 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:44.388 21:12:18 -- common/autotest_common.sh@114 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:44.388 21:12:18 -- common/autotest_common.sh@116 -- # : 1 00:08:44.388 21:12:18 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:44.388 21:12:18 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:44.388 21:12:18 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:44.388 21:12:18 -- common/autotest_common.sh@120 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:44.388 21:12:18 -- common/autotest_common.sh@122 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:44.388 21:12:18 -- common/autotest_common.sh@124 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:44.388 21:12:18 -- common/autotest_common.sh@126 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:44.388 21:12:18 -- common/autotest_common.sh@128 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:44.388 21:12:18 -- common/autotest_common.sh@130 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:44.388 21:12:18 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:44.388 21:12:18 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:44.388 21:12:18 -- common/autotest_common.sh@134 -- # : true 00:08:44.388 21:12:18 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:44.388 21:12:18 -- common/autotest_common.sh@136 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:44.388 21:12:18 -- common/autotest_common.sh@138 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:44.388 21:12:18 -- common/autotest_common.sh@140 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:44.388 21:12:18 -- common/autotest_common.sh@142 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:44.388 21:12:18 -- common/autotest_common.sh@144 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:44.388 21:12:18 -- common/autotest_common.sh@146 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:44.388 21:12:18 -- common/autotest_common.sh@148 -- # : mlx5 00:08:44.388 21:12:18 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:44.388 21:12:18 -- common/autotest_common.sh@150 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:44.388 21:12:18 -- common/autotest_common.sh@152 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@153 -- # export 
SPDK_TEST_DAOS 00:08:44.388 21:12:18 -- common/autotest_common.sh@154 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:44.388 21:12:18 -- common/autotest_common.sh@156 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:44.388 21:12:18 -- common/autotest_common.sh@158 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:44.388 21:12:18 -- common/autotest_common.sh@160 -- # : 0 00:08:44.388 21:12:18 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:44.389 21:12:18 -- common/autotest_common.sh@163 -- # : 00:08:44.389 21:12:18 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:44.389 21:12:18 -- common/autotest_common.sh@165 -- # : 0 00:08:44.389 21:12:18 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:44.389 21:12:18 -- common/autotest_common.sh@167 -- # : 0 00:08:44.389 21:12:18 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:44.389 21:12:18 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:44.389 21:12:18 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:44.389 21:12:18 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:44.389 21:12:18 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:44.389 21:12:18 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:44.389 21:12:18 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:44.389 21:12:18 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:44.389 21:12:18 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:44.389 21:12:18 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:44.389 21:12:18 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:44.389 21:12:18 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:44.389 21:12:18 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:44.389 21:12:18 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:44.389 21:12:18 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:44.389 21:12:18 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:44.389 21:12:18 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:44.389 21:12:18 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:44.389 21:12:18 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:44.389 21:12:18 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:44.389 21:12:18 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:44.389 21:12:18 -- common/autotest_common.sh@196 -- # cat 00:08:44.389 21:12:18 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:44.389 21:12:18 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:44.389 21:12:18 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:44.389 21:12:18 -- 
common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:44.389 21:12:18 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:44.389 21:12:18 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:44.389 21:12:18 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:44.389 21:12:18 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:44.389 21:12:18 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:44.389 21:12:18 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:44.389 21:12:18 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:44.389 21:12:18 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:44.389 21:12:18 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:44.389 21:12:18 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:44.389 21:12:18 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:44.389 21:12:18 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:44.389 21:12:18 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:44.389 21:12:18 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:44.389 21:12:18 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:44.389 21:12:18 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:44.389 21:12:18 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:44.389 21:12:18 -- common/autotest_common.sh@249 -- # valgrind= 00:08:44.389 21:12:18 -- common/autotest_common.sh@255 -- # uname -s 00:08:44.389 21:12:18 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:44.389 21:12:18 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:44.389 21:12:18 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:44.389 21:12:18 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:44.389 21:12:18 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:44.389 21:12:18 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:44.389 21:12:18 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:44.389 21:12:18 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j112 00:08:44.389 21:12:18 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:44.389 21:12:18 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:44.389 21:12:18 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:44.389 21:12:18 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:44.389 21:12:18 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:44.389 21:12:18 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:44.389 21:12:18 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:44.389 21:12:18 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=rdma 00:08:44.389 21:12:18 -- common/autotest_common.sh@309 -- # [[ -z 1535209 ]] 00:08:44.389 21:12:18 -- 
common/autotest_common.sh@309 -- # kill -0 1535209 00:08:44.389 21:12:18 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:44.389 21:12:18 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:44.389 21:12:18 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:44.389 21:12:18 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:44.389 21:12:18 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:44.389 21:12:18 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:44.389 21:12:18 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:44.389 21:12:18 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:44.389 21:12:18 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.BryaMk 00:08:44.389 21:12:18 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:44.389 21:12:18 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:44.389 21:12:18 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:44.389 21:12:18 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.BryaMk/tests/target /tmp/spdk.BryaMk 00:08:44.389 21:12:18 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:44.389 21:12:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:44.389 21:12:18 -- common/autotest_common.sh@318 -- # df -T 00:08:44.389 21:12:18 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:44.389 21:12:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:44.389 21:12:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:44.389 21:12:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:44.389 21:12:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:44.389 21:12:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:44.389 21:12:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:44.389 21:12:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:44.389 21:12:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:44.389 21:12:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=919109632 00:08:44.389 21:12:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:44.389 21:12:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=4365320192 00:08:44.389 21:12:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:44.389 21:12:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:44.389 21:12:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:44.389 21:12:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=49657696256 00:08:44.389 21:12:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61742276608 00:08:44.389 21:12:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=12084580352 00:08:44.389 21:12:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:44.389 21:12:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:44.390 21:12:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:44.390 21:12:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=30817619968 00:08:44.390 21:12:18 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=30871138304 00:08:44.390 21:12:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:44.390 21:12:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:44.390 21:12:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:44.390 21:12:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:44.390 21:12:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=12338671616 00:08:44.390 21:12:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12348456960 00:08:44.390 21:12:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=9785344 00:08:44.390 21:12:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:44.390 21:12:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:44.390 21:12:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:44.390 21:12:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=30865592320 00:08:44.390 21:12:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30871138304 00:08:44.390 21:12:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=5545984 00:08:44.390 21:12:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:44.390 21:12:18 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:44.390 21:12:18 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:44.390 21:12:18 -- common/autotest_common.sh@353 -- # avails["$mount"]=6174220288 00:08:44.390 21:12:18 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6174224384 00:08:44.390 21:12:18 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:44.390 21:12:18 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:44.390 21:12:18 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:44.390 * Looking for test storage... 
00:08:44.390 21:12:18 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:44.390 21:12:18 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:44.390 21:12:18 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.390 21:12:18 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:44.390 21:12:18 -- common/autotest_common.sh@363 -- # mount=/ 00:08:44.390 21:12:18 -- common/autotest_common.sh@365 -- # target_space=49657696256 00:08:44.390 21:12:18 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:44.390 21:12:18 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:44.390 21:12:18 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:44.390 21:12:18 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:44.390 21:12:18 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:44.390 21:12:18 -- common/autotest_common.sh@372 -- # new_size=14299172864 00:08:44.390 21:12:18 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:44.390 21:12:18 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.390 21:12:18 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.390 21:12:18 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.390 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.390 21:12:18 -- common/autotest_common.sh@380 -- # return 0 00:08:44.390 21:12:18 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:44.390 21:12:18 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:44.390 21:12:18 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:44.390 21:12:18 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:44.390 21:12:18 -- common/autotest_common.sh@1672 -- # true 00:08:44.390 21:12:18 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:44.390 21:12:18 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:44.390 21:12:18 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:44.390 21:12:18 -- common/autotest_common.sh@27 -- # exec 00:08:44.390 21:12:18 -- common/autotest_common.sh@29 -- # exec 00:08:44.390 21:12:18 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:44.390 21:12:18 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:44.390 21:12:18 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:44.390 21:12:18 -- common/autotest_common.sh@18 -- # set -x 00:08:44.390 21:12:18 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.390 21:12:18 -- nvmf/common.sh@7 -- # uname -s 00:08:44.390 21:12:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.390 21:12:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.390 21:12:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.390 21:12:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.390 21:12:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.390 21:12:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.390 21:12:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.390 21:12:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.390 21:12:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.390 21:12:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.390 21:12:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:44.390 21:12:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:44.390 21:12:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.390 21:12:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.390 21:12:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.390 21:12:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:44.390 21:12:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.390 21:12:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.390 21:12:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.390 21:12:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.390 21:12:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.390 21:12:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.390 21:12:18 -- paths/export.sh@5 -- # export PATH 00:08:44.390 21:12:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.390 21:12:18 -- nvmf/common.sh@46 -- # : 0 00:08:44.390 21:12:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:44.390 21:12:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:44.390 21:12:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:44.390 21:12:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.390 21:12:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.390 21:12:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:44.390 21:12:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:44.390 21:12:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:44.390 21:12:18 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:44.390 21:12:18 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:44.390 21:12:18 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:44.390 21:12:18 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:44.390 21:12:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.390 21:12:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:44.390 21:12:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:44.390 21:12:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:44.390 21:12:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.390 21:12:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.390 21:12:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.390 21:12:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:44.390 21:12:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:44.390 21:12:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:44.390 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:08:52.515 21:12:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:52.515 21:12:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:52.515 21:12:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:52.515 21:12:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:52.515 21:12:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:52.515 21:12:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:52.515 21:12:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:52.515 21:12:26 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:52.515 21:12:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:52.515 21:12:26 -- nvmf/common.sh@295 -- # e810=() 00:08:52.515 21:12:26 -- nvmf/common.sh@295 -- # local -ga e810 00:08:52.515 21:12:26 -- nvmf/common.sh@296 -- # x722=() 00:08:52.515 21:12:26 -- nvmf/common.sh@296 -- # local -ga x722 00:08:52.515 21:12:26 -- nvmf/common.sh@297 -- # mlx=() 00:08:52.515 21:12:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:52.515 21:12:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.515 21:12:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:52.515 21:12:26 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:52.515 21:12:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:52.515 21:12:26 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:52.515 21:12:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:52.515 21:12:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.515 21:12:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:52.515 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:52.515 21:12:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.515 21:12:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.515 21:12:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:52.515 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:52.515 21:12:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.515 21:12:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:52.515 21:12:26 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:52.515 21:12:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.515 
21:12:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.515 21:12:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.515 21:12:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.515 21:12:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:52.515 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:52.515 21:12:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.515 21:12:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.515 21:12:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.515 21:12:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.515 21:12:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.516 21:12:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:52.516 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:52.516 21:12:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.516 21:12:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:52.516 21:12:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:52.516 21:12:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:52.516 21:12:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:52.516 21:12:26 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:52.516 21:12:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:52.516 21:12:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:52.516 21:12:26 -- nvmf/common.sh@57 -- # uname 00:08:52.516 21:12:26 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:52.516 21:12:26 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:52.516 21:12:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:52.516 21:12:26 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:52.516 21:12:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:52.516 21:12:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:52.516 21:12:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:52.516 21:12:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:52.516 21:12:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:52.516 21:12:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:52.516 21:12:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:52.516 21:12:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.516 21:12:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:52.516 21:12:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:52.516 21:12:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.516 21:12:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:52.516 21:12:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.516 21:12:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.516 21:12:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.516 21:12:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:52.516 21:12:26 -- nvmf/common.sh@104 -- # continue 2 00:08:52.516 21:12:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.516 21:12:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.516 21:12:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.516 21:12:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.516 21:12:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.516 21:12:26 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:52.516 21:12:26 -- nvmf/common.sh@104 -- # continue 2 00:08:52.516 21:12:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:52.516 21:12:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:52.516 21:12:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:52.516 21:12:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.516 21:12:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:52.516 21:12:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.516 21:12:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:52.516 21:12:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:52.516 21:12:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:52.516 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.516 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:52.516 altname enp217s0f0np0 00:08:52.516 altname ens818f0np0 00:08:52.516 inet 192.168.100.8/24 scope global mlx_0_0 00:08:52.516 valid_lft forever preferred_lft forever 00:08:52.516 21:12:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:52.516 21:12:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:52.516 21:12:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:52.516 21:12:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.516 21:12:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.516 21:12:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:52.516 21:12:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:52.516 21:12:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:52.516 21:12:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:52.516 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.516 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:52.516 altname enp217s0f1np1 00:08:52.516 altname ens818f1np1 00:08:52.516 inet 192.168.100.9/24 scope global mlx_0_1 00:08:52.516 valid_lft forever preferred_lft forever 00:08:52.516 21:12:26 -- nvmf/common.sh@410 -- # return 0 00:08:52.516 21:12:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:52.516 21:12:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:52.516 21:12:27 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:52.516 21:12:27 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:52.516 21:12:27 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:52.516 21:12:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.516 21:12:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:52.516 21:12:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:52.516 21:12:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.516 21:12:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:52.516 21:12:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.516 21:12:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.516 21:12:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.516 21:12:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:52.516 21:12:27 -- nvmf/common.sh@104 -- # continue 2 00:08:52.516 21:12:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.516 21:12:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.516 21:12:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.516 21:12:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.516 21:12:27 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.516 21:12:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:52.516 21:12:27 -- nvmf/common.sh@104 -- # continue 2 00:08:52.516 21:12:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:52.516 21:12:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:52.516 21:12:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:52.516 21:12:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:52.516 21:12:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.516 21:12:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.516 21:12:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:52.516 21:12:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:52.516 21:12:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:52.516 21:12:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:52.516 21:12:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.516 21:12:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.516 21:12:27 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:52.516 192.168.100.9' 00:08:52.516 21:12:27 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:52.516 192.168.100.9' 00:08:52.516 21:12:27 -- nvmf/common.sh@445 -- # head -n 1 00:08:52.516 21:12:27 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:52.516 21:12:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:52.516 192.168.100.9' 00:08:52.516 21:12:27 -- nvmf/common.sh@446 -- # tail -n +2 00:08:52.516 21:12:27 -- nvmf/common.sh@446 -- # head -n 1 00:08:52.516 21:12:27 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:52.516 21:12:27 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:52.516 21:12:27 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:52.516 21:12:27 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:52.516 21:12:27 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:52.516 21:12:27 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:52.516 21:12:27 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:52.516 21:12:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:52.516 21:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.516 21:12:27 -- common/autotest_common.sh@10 -- # set +x 00:08:52.516 ************************************ 00:08:52.516 START TEST nvmf_filesystem_no_in_capsule 00:08:52.516 ************************************ 00:08:52.516 21:12:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:52.516 21:12:27 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:52.516 21:12:27 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:52.516 21:12:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:52.516 21:12:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:52.516 21:12:27 -- common/autotest_common.sh@10 -- # set +x 00:08:52.516 21:12:27 -- nvmf/common.sh@469 -- # nvmfpid=1539136 00:08:52.516 21:12:27 -- nvmf/common.sh@470 -- # waitforlisten 1539136 00:08:52.516 21:12:27 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.516 21:12:27 -- common/autotest_common.sh@819 -- # '[' -z 1539136 ']' 00:08:52.516 21:12:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.516 21:12:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:52.516 21:12:27 -- common/autotest_common.sh@826 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.516 21:12:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:52.516 21:12:27 -- common/autotest_common.sh@10 -- # set +x 00:08:52.516 [2024-07-26 21:12:27.178676] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:52.516 [2024-07-26 21:12:27.178728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.516 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.516 [2024-07-26 21:12:27.262932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.516 [2024-07-26 21:12:27.301482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.516 [2024-07-26 21:12:27.301594] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.516 [2024-07-26 21:12:27.301605] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.516 [2024-07-26 21:12:27.301613] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.516 [2024-07-26 21:12:27.301673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.517 [2024-07-26 21:12:27.301769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.517 [2024-07-26 21:12:27.301853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.517 [2024-07-26 21:12:27.301854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.454 21:12:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:53.454 21:12:27 -- common/autotest_common.sh@852 -- # return 0 00:08:53.454 21:12:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:53.454 21:12:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:53.454 21:12:27 -- common/autotest_common.sh@10 -- # set +x 00:08:53.454 21:12:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.454 21:12:28 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:53.454 21:12:28 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:53.454 21:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.454 21:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:53.454 [2024-07-26 21:12:28.032013] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:53.454 [2024-07-26 21:12:28.054949] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24b3ec0/0x24b83b0) succeed. 00:08:53.454 [2024-07-26 21:12:28.065960] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24b54b0/0x24f9a40) succeed. 
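For context, the target bring-up traced above amounts to two steps: launching nvmf_tgt and creating the RDMA transport over the management socket. A minimal sketch of the same sequence, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket rather than the harness's rpc_cmd wrapper:

    # launch the target on cores 0-3 with all tracepoint groups enabled (flags as in the trace)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # RDMA transport with no in-capsule data (-c 0); per the warning above the target
    # raises this to the 256-byte minimum needed for msdbd=16
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0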
00:08:53.454 21:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.454 21:12:28 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:53.454 21:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.454 21:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:53.454 Malloc1 00:08:53.454 21:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.454 21:12:28 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:53.454 21:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.454 21:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:53.454 21:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.454 21:12:28 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:53.454 21:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.454 21:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:53.454 21:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.454 21:12:28 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:53.454 21:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.454 21:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:53.454 [2024-07-26 21:12:28.307311] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:53.454 21:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.454 21:12:28 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:53.454 21:12:28 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:53.454 21:12:28 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:53.454 21:12:28 -- common/autotest_common.sh@1359 -- # local bs 00:08:53.454 21:12:28 -- common/autotest_common.sh@1360 -- # local nb 00:08:53.454 21:12:28 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:53.454 21:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.454 21:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:53.713 21:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.713 21:12:28 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:53.713 { 00:08:53.713 "name": "Malloc1", 00:08:53.713 "aliases": [ 00:08:53.713 "a3914980-9f38-4c78-bc7b-2431297ad0dd" 00:08:53.713 ], 00:08:53.713 "product_name": "Malloc disk", 00:08:53.713 "block_size": 512, 00:08:53.713 "num_blocks": 1048576, 00:08:53.713 "uuid": "a3914980-9f38-4c78-bc7b-2431297ad0dd", 00:08:53.713 "assigned_rate_limits": { 00:08:53.713 "rw_ios_per_sec": 0, 00:08:53.713 "rw_mbytes_per_sec": 0, 00:08:53.713 "r_mbytes_per_sec": 0, 00:08:53.713 "w_mbytes_per_sec": 0 00:08:53.713 }, 00:08:53.713 "claimed": true, 00:08:53.713 "claim_type": "exclusive_write", 00:08:53.713 "zoned": false, 00:08:53.713 "supported_io_types": { 00:08:53.713 "read": true, 00:08:53.713 "write": true, 00:08:53.713 "unmap": true, 00:08:53.713 "write_zeroes": true, 00:08:53.713 "flush": true, 00:08:53.713 "reset": true, 00:08:53.713 "compare": false, 00:08:53.713 "compare_and_write": false, 00:08:53.713 "abort": true, 00:08:53.713 "nvme_admin": false, 00:08:53.713 "nvme_io": false 00:08:53.713 }, 00:08:53.713 "memory_domains": [ 00:08:53.713 { 00:08:53.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.713 "dma_device_type": 2 00:08:53.713 } 00:08:53.713 ], 00:08:53.713 
"driver_specific": {} 00:08:53.713 } 00:08:53.713 ]' 00:08:53.713 21:12:28 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:53.713 21:12:28 -- common/autotest_common.sh@1362 -- # bs=512 00:08:53.713 21:12:28 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:53.713 21:12:28 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:53.713 21:12:28 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:53.713 21:12:28 -- common/autotest_common.sh@1367 -- # echo 512 00:08:53.713 21:12:28 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:53.713 21:12:28 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:54.649 21:12:29 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.649 21:12:29 -- common/autotest_common.sh@1177 -- # local i=0 00:08:54.649 21:12:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.649 21:12:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:54.649 21:12:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:57.234 21:12:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:57.234 21:12:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:57.234 21:12:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:57.234 21:12:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:57.234 21:12:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:57.234 21:12:31 -- common/autotest_common.sh@1187 -- # return 0 00:08:57.234 21:12:31 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:57.234 21:12:31 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:57.234 21:12:31 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:57.234 21:12:31 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:57.234 21:12:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:57.234 21:12:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:57.234 21:12:31 -- setup/common.sh@80 -- # echo 536870912 00:08:57.234 21:12:31 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:57.234 21:12:31 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:57.234 21:12:31 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:57.234 21:12:31 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:57.234 21:12:31 -- target/filesystem.sh@69 -- # partprobe 00:08:57.234 21:12:31 -- target/filesystem.sh@70 -- # sleep 1 00:08:58.172 21:12:32 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:58.172 21:12:32 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:58.172 21:12:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:58.172 21:12:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.172 21:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:58.172 ************************************ 00:08:58.172 START TEST filesystem_ext4 00:08:58.172 ************************************ 00:08:58.172 21:12:32 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:58.172 21:12:32 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:58.172 21:12:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:58.172 
21:12:32 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:58.172 21:12:32 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:58.172 21:12:32 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:58.172 21:12:32 -- common/autotest_common.sh@904 -- # local i=0 00:08:58.172 21:12:32 -- common/autotest_common.sh@905 -- # local force 00:08:58.172 21:12:32 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:58.172 21:12:32 -- common/autotest_common.sh@908 -- # force=-F 00:08:58.172 21:12:32 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:58.172 mke2fs 1.46.5 (30-Dec-2021) 00:08:58.172 Discarding device blocks: 0/522240 done 00:08:58.172 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:58.172 Filesystem UUID: f964ad48-eece-4be5-9ca3-ec818e3b2aec 00:08:58.172 Superblock backups stored on blocks: 00:08:58.172 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:58.172 00:08:58.172 Allocating group tables: 0/64 done 00:08:58.172 Writing inode tables: 0/64 done 00:08:58.172 Creating journal (8192 blocks): done 00:08:58.172 Writing superblocks and filesystem accounting information: 0/64 done 00:08:58.172 00:08:58.172 21:12:32 -- common/autotest_common.sh@921 -- # return 0 00:08:58.172 21:12:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:58.172 21:12:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:58.172 21:12:32 -- target/filesystem.sh@25 -- # sync 00:08:58.172 21:12:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:58.172 21:12:32 -- target/filesystem.sh@27 -- # sync 00:08:58.172 21:12:32 -- target/filesystem.sh@29 -- # i=0 00:08:58.172 21:12:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:58.172 21:12:32 -- target/filesystem.sh@37 -- # kill -0 1539136 00:08:58.172 21:12:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:58.172 21:12:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:58.172 21:12:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:58.172 21:12:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:58.172 00:08:58.172 real 0m0.191s 00:08:58.172 user 0m0.028s 00:08:58.172 sys 0m0.078s 00:08:58.172 21:12:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.172 21:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:58.172 ************************************ 00:08:58.172 END TEST filesystem_ext4 00:08:58.172 ************************************ 00:08:58.172 21:12:33 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:58.172 21:12:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:58.172 21:12:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.431 21:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:58.431 ************************************ 00:08:58.431 START TEST filesystem_btrfs 00:08:58.431 ************************************ 00:08:58.431 21:12:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:58.431 21:12:33 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:58.431 21:12:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:58.431 21:12:33 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:58.431 21:12:33 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:58.431 21:12:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:58.431 21:12:33 -- common/autotest_common.sh@904 -- # 
local i=0 00:08:58.431 21:12:33 -- common/autotest_common.sh@905 -- # local force 00:08:58.431 21:12:33 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:58.431 21:12:33 -- common/autotest_common.sh@910 -- # force=-f 00:08:58.431 21:12:33 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:58.431 btrfs-progs v6.6.2 00:08:58.431 See https://btrfs.readthedocs.io for more information. 00:08:58.431 00:08:58.431 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:58.431 NOTE: several default settings have changed in version 5.15, please make sure 00:08:58.431 this does not affect your deployments: 00:08:58.431 - DUP for metadata (-m dup) 00:08:58.431 - enabled no-holes (-O no-holes) 00:08:58.431 - enabled free-space-tree (-R free-space-tree) 00:08:58.431 00:08:58.431 Label: (null) 00:08:58.431 UUID: 4a047f96-9530-4114-a1a1-cf990bf39ab7 00:08:58.431 Node size: 16384 00:08:58.431 Sector size: 4096 00:08:58.431 Filesystem size: 510.00MiB 00:08:58.431 Block group profiles: 00:08:58.431 Data: single 8.00MiB 00:08:58.431 Metadata: DUP 32.00MiB 00:08:58.431 System: DUP 8.00MiB 00:08:58.431 SSD detected: yes 00:08:58.431 Zoned device: no 00:08:58.431 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:58.431 Runtime features: free-space-tree 00:08:58.431 Checksum: crc32c 00:08:58.431 Number of devices: 1 00:08:58.431 Devices: 00:08:58.431 ID SIZE PATH 00:08:58.431 1 510.00MiB /dev/nvme0n1p1 00:08:58.431 00:08:58.431 21:12:33 -- common/autotest_common.sh@921 -- # return 0 00:08:58.431 21:12:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:58.431 21:12:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:58.431 21:12:33 -- target/filesystem.sh@25 -- # sync 00:08:58.431 21:12:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:58.431 21:12:33 -- target/filesystem.sh@27 -- # sync 00:08:58.431 21:12:33 -- target/filesystem.sh@29 -- # i=0 00:08:58.431 21:12:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:58.431 21:12:33 -- target/filesystem.sh@37 -- # kill -0 1539136 00:08:58.431 21:12:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:58.431 21:12:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:58.690 21:12:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:58.690 21:12:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:58.690 00:08:58.690 real 0m0.265s 00:08:58.690 user 0m0.037s 00:08:58.690 sys 0m0.137s 00:08:58.690 21:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.690 21:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:58.690 ************************************ 00:08:58.690 END TEST filesystem_btrfs 00:08:58.690 ************************************ 00:08:58.690 21:12:33 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:58.690 21:12:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:58.690 21:12:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.690 21:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:58.690 ************************************ 00:08:58.690 START TEST filesystem_xfs 00:08:58.690 ************************************ 00:08:58.690 21:12:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:58.690 21:12:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:58.690 21:12:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:58.690 21:12:33 -- target/filesystem.sh@21 -- # make_filesystem xfs 
/dev/nvme0n1p1 00:08:58.690 21:12:33 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:58.690 21:12:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:58.690 21:12:33 -- common/autotest_common.sh@904 -- # local i=0 00:08:58.690 21:12:33 -- common/autotest_common.sh@905 -- # local force 00:08:58.690 21:12:33 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:58.690 21:12:33 -- common/autotest_common.sh@910 -- # force=-f 00:08:58.690 21:12:33 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:58.690 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:58.690 = sectsz=512 attr=2, projid32bit=1 00:08:58.690 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:58.690 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:58.690 data = bsize=4096 blocks=130560, imaxpct=25 00:08:58.690 = sunit=0 swidth=0 blks 00:08:58.690 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:58.690 log =internal log bsize=4096 blocks=16384, version=2 00:08:58.690 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:58.690 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:58.690 Discarding blocks...Done. 00:08:58.690 21:12:33 -- common/autotest_common.sh@921 -- # return 0 00:08:58.690 21:12:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:58.690 21:12:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:58.690 21:12:33 -- target/filesystem.sh@25 -- # sync 00:08:58.690 21:12:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:58.690 21:12:33 -- target/filesystem.sh@27 -- # sync 00:08:58.690 21:12:33 -- target/filesystem.sh@29 -- # i=0 00:08:58.690 21:12:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:58.690 21:12:33 -- target/filesystem.sh@37 -- # kill -0 1539136 00:08:58.690 21:12:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:58.690 21:12:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:58.949 21:12:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:58.949 21:12:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:58.949 00:08:58.949 real 0m0.215s 00:08:58.949 user 0m0.022s 00:08:58.949 sys 0m0.090s 00:08:58.949 21:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.949 21:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 ************************************ 00:08:58.949 END TEST filesystem_xfs 00:08:58.949 ************************************ 00:08:58.949 21:12:33 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:58.949 21:12:33 -- target/filesystem.sh@93 -- # sync 00:08:58.949 21:12:33 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.886 21:12:34 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.886 21:12:34 -- common/autotest_common.sh@1198 -- # local i=0 00:08:59.886 21:12:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:59.886 21:12:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.886 21:12:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.886 21:12:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:59.886 21:12:34 -- common/autotest_common.sh@1210 -- # return 0 00:08:59.886 21:12:34 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.886 21:12:34 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:59.886 21:12:34 -- common/autotest_common.sh@10 -- # set +x 00:08:59.886 21:12:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.886 21:12:34 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:59.886 21:12:34 -- target/filesystem.sh@101 -- # killprocess 1539136 00:08:59.886 21:12:34 -- common/autotest_common.sh@926 -- # '[' -z 1539136 ']' 00:08:59.886 21:12:34 -- common/autotest_common.sh@930 -- # kill -0 1539136 00:08:59.886 21:12:34 -- common/autotest_common.sh@931 -- # uname 00:08:59.886 21:12:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:59.886 21:12:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1539136 00:08:59.886 21:12:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:59.886 21:12:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:59.886 21:12:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1539136' 00:08:59.886 killing process with pid 1539136 00:08:59.886 21:12:34 -- common/autotest_common.sh@945 -- # kill 1539136 00:08:59.886 21:12:34 -- common/autotest_common.sh@950 -- # wait 1539136 00:09:00.454 21:12:35 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:00.454 00:09:00.454 real 0m7.979s 00:09:00.454 user 0m31.194s 00:09:00.454 sys 0m1.205s 00:09:00.454 21:12:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.454 21:12:35 -- common/autotest_common.sh@10 -- # set +x 00:09:00.454 ************************************ 00:09:00.454 END TEST nvmf_filesystem_no_in_capsule 00:09:00.454 ************************************ 00:09:00.454 21:12:35 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:00.454 21:12:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:00.454 21:12:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:00.454 21:12:35 -- common/autotest_common.sh@10 -- # set +x 00:09:00.454 ************************************ 00:09:00.454 START TEST nvmf_filesystem_in_capsule 00:09:00.454 ************************************ 00:09:00.454 21:12:35 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:09:00.454 21:12:35 -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:00.454 21:12:35 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:00.454 21:12:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:00.454 21:12:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:00.454 21:12:35 -- common/autotest_common.sh@10 -- # set +x 00:09:00.454 21:12:35 -- nvmf/common.sh@469 -- # nvmfpid=1540701 00:09:00.454 21:12:35 -- nvmf/common.sh@470 -- # waitforlisten 1540701 00:09:00.454 21:12:35 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.454 21:12:35 -- common/autotest_common.sh@819 -- # '[' -z 1540701 ']' 00:09:00.454 21:12:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.454 21:12:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:00.454 21:12:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
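The pass that starts here differs from the first only in the in-capsule data size handed to the transport: 4096 bytes instead of 0. Under the same assumptions as the sketch above (rpc.py, default RPC socket), the only changed call is:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096

With in-capsule data enabled, small write payloads can travel inside the fabrics command capsule instead of requiring a separate RDMA read by the target.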
00:09:00.454 21:12:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:00.454 21:12:35 -- common/autotest_common.sh@10 -- # set +x 00:09:00.454 [2024-07-26 21:12:35.210281] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:00.454 [2024-07-26 21:12:35.210335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.454 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.454 [2024-07-26 21:12:35.297123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.712 [2024-07-26 21:12:35.335382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.712 [2024-07-26 21:12:35.335492] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.712 [2024-07-26 21:12:35.335502] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.712 [2024-07-26 21:12:35.335512] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.712 [2024-07-26 21:12:35.335563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.712 [2024-07-26 21:12:35.335659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.712 [2024-07-26 21:12:35.335692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.712 [2024-07-26 21:12:35.335694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.279 21:12:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:01.279 21:12:36 -- common/autotest_common.sh@852 -- # return 0 00:09:01.279 21:12:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:01.279 21:12:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:01.279 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:09:01.279 21:12:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.279 21:12:36 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:01.279 21:12:36 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:01.279 21:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.279 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:09:01.279 [2024-07-26 21:12:36.083847] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d3dec0/0x1d423b0) succeed. 00:09:01.279 [2024-07-26 21:12:36.094050] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d3f4b0/0x1d83a40) succeed. 
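As in the first pass, the transport creation that follows is paired with a Malloc bdev exported as a namespace, an RDMA listener, and a host-side connect. The equivalent standalone commands, with the NQNs, serial, host ID and the 192.168.100.8:4420 listener taken from the trace (rpc.py and running from the spdk repo root are assumptions; the connect line mirrors the trace verbatim):

    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # initiator side, exactly as the script issues it for RDMA targets
    nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420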
00:09:01.537 21:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.537 21:12:36 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:01.537 21:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.537 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:09:01.537 Malloc1 00:09:01.537 21:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.537 21:12:36 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.537 21:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.537 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:09:01.537 21:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.537 21:12:36 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:01.537 21:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.537 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:09:01.537 21:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.537 21:12:36 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:01.537 21:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.537 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:09:01.537 [2024-07-26 21:12:36.362987] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:01.537 21:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.537 21:12:36 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:01.537 21:12:36 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:09:01.537 21:12:36 -- common/autotest_common.sh@1358 -- # local bdev_info 00:09:01.537 21:12:36 -- common/autotest_common.sh@1359 -- # local bs 00:09:01.537 21:12:36 -- common/autotest_common.sh@1360 -- # local nb 00:09:01.537 21:12:36 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:01.537 21:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.537 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:09:01.537 21:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.537 21:12:36 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:09:01.537 { 00:09:01.537 "name": "Malloc1", 00:09:01.537 "aliases": [ 00:09:01.537 "3a6cf8f6-616f-4d1b-8aa2-ecb04bb47016" 00:09:01.537 ], 00:09:01.537 "product_name": "Malloc disk", 00:09:01.537 "block_size": 512, 00:09:01.537 "num_blocks": 1048576, 00:09:01.537 "uuid": "3a6cf8f6-616f-4d1b-8aa2-ecb04bb47016", 00:09:01.537 "assigned_rate_limits": { 00:09:01.537 "rw_ios_per_sec": 0, 00:09:01.537 "rw_mbytes_per_sec": 0, 00:09:01.537 "r_mbytes_per_sec": 0, 00:09:01.537 "w_mbytes_per_sec": 0 00:09:01.537 }, 00:09:01.537 "claimed": true, 00:09:01.537 "claim_type": "exclusive_write", 00:09:01.537 "zoned": false, 00:09:01.537 "supported_io_types": { 00:09:01.537 "read": true, 00:09:01.537 "write": true, 00:09:01.537 "unmap": true, 00:09:01.537 "write_zeroes": true, 00:09:01.537 "flush": true, 00:09:01.537 "reset": true, 00:09:01.537 "compare": false, 00:09:01.537 "compare_and_write": false, 00:09:01.537 "abort": true, 00:09:01.537 "nvme_admin": false, 00:09:01.537 "nvme_io": false 00:09:01.537 }, 00:09:01.537 "memory_domains": [ 00:09:01.537 { 00:09:01.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.537 "dma_device_type": 2 00:09:01.537 } 00:09:01.537 ], 00:09:01.537 
"driver_specific": {} 00:09:01.537 } 00:09:01.537 ]' 00:09:01.537 21:12:36 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:09:01.795 21:12:36 -- common/autotest_common.sh@1362 -- # bs=512 00:09:01.795 21:12:36 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:09:01.795 21:12:36 -- common/autotest_common.sh@1363 -- # nb=1048576 00:09:01.795 21:12:36 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:09:01.795 21:12:36 -- common/autotest_common.sh@1367 -- # echo 512 00:09:01.795 21:12:36 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:01.795 21:12:36 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:02.729 21:12:37 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.729 21:12:37 -- common/autotest_common.sh@1177 -- # local i=0 00:09:02.729 21:12:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.729 21:12:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:09:02.729 21:12:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:09:04.631 21:12:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:09:04.631 21:12:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:09:04.631 21:12:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.632 21:12:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:09:04.632 21:12:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.632 21:12:39 -- common/autotest_common.sh@1187 -- # return 0 00:09:04.632 21:12:39 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:04.632 21:12:39 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:04.632 21:12:39 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:04.632 21:12:39 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:04.632 21:12:39 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:04.632 21:12:39 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:04.632 21:12:39 -- setup/common.sh@80 -- # echo 536870912 00:09:04.632 21:12:39 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:04.632 21:12:39 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:04.632 21:12:39 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:04.632 21:12:39 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:04.890 21:12:39 -- target/filesystem.sh@69 -- # partprobe 00:09:05.150 21:12:39 -- target/filesystem.sh@70 -- # sleep 1 00:09:06.086 21:12:40 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:06.086 21:12:40 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:06.086 21:12:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:06.086 21:12:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.086 21:12:40 -- common/autotest_common.sh@10 -- # set +x 00:09:06.086 ************************************ 00:09:06.086 START TEST filesystem_in_capsule_ext4 00:09:06.086 ************************************ 00:09:06.086 21:12:40 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:06.086 21:12:40 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:06.086 21:12:40 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:09:06.086 21:12:40 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:06.086 21:12:40 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:09:06.086 21:12:40 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:06.086 21:12:40 -- common/autotest_common.sh@904 -- # local i=0 00:09:06.086 21:12:40 -- common/autotest_common.sh@905 -- # local force 00:09:06.086 21:12:40 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:09:06.086 21:12:40 -- common/autotest_common.sh@908 -- # force=-F 00:09:06.086 21:12:40 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:06.086 mke2fs 1.46.5 (30-Dec-2021) 00:09:06.086 Discarding device blocks: 0/522240 done 00:09:06.086 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:06.086 Filesystem UUID: 04fbdae7-fc55-49cb-b869-1259d2fb7c80 00:09:06.086 Superblock backups stored on blocks: 00:09:06.086 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:06.086 00:09:06.086 Allocating group tables: 0/64 done 00:09:06.086 Writing inode tables: 0/64 done 00:09:06.086 Creating journal (8192 blocks): done 00:09:06.086 Writing superblocks and filesystem accounting information: 0/64 done 00:09:06.086 00:09:06.086 21:12:40 -- common/autotest_common.sh@921 -- # return 0 00:09:06.086 21:12:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:06.086 21:12:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:06.086 21:12:40 -- target/filesystem.sh@25 -- # sync 00:09:06.086 21:12:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:06.086 21:12:40 -- target/filesystem.sh@27 -- # sync 00:09:06.086 21:12:40 -- target/filesystem.sh@29 -- # i=0 00:09:06.086 21:12:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:06.344 21:12:40 -- target/filesystem.sh@37 -- # kill -0 1540701 00:09:06.344 21:12:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:06.345 21:12:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:06.345 21:12:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:06.345 21:12:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:06.345 00:09:06.345 real 0m0.184s 00:09:06.345 user 0m0.033s 00:09:06.345 sys 0m0.066s 00:09:06.345 21:12:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.345 21:12:40 -- common/autotest_common.sh@10 -- # set +x 00:09:06.345 ************************************ 00:09:06.345 END TEST filesystem_in_capsule_ext4 00:09:06.345 ************************************ 00:09:06.345 21:12:41 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:06.345 21:12:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:06.345 21:12:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.345 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:09:06.345 ************************************ 00:09:06.345 START TEST filesystem_in_capsule_btrfs 00:09:06.345 ************************************ 00:09:06.345 21:12:41 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:06.345 21:12:41 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:06.345 21:12:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:06.345 21:12:41 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:06.345 21:12:41 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:09:06.345 21:12:41 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 
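The btrfs and xfs subtests that follow repeat the verify loop just traced for ext4: build the filesystem on the exported partition, mount it, create and remove a file, then unmount and confirm the target process survived the I/O. In outline (device and mountpoint as in the trace; this is a condensed sketch of the traced filesystem.sh steps, not its exact code):

    mkfs.ext4 -F /dev/nvme0n1p1        # btrfs and xfs variants use mkfs.btrfs -f / mkfs.xfs -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa; sync
    rm /mnt/device/aaa; sync
    umount /mnt/device
    kill -0 "$nvmfpid"                 # the nvmf target (pid 1540701 here) must still be running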
00:09:06.345 21:12:41 -- common/autotest_common.sh@904 -- # local i=0 00:09:06.345 21:12:41 -- common/autotest_common.sh@905 -- # local force 00:09:06.345 21:12:41 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:09:06.345 21:12:41 -- common/autotest_common.sh@910 -- # force=-f 00:09:06.345 21:12:41 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:06.345 btrfs-progs v6.6.2 00:09:06.345 See https://btrfs.readthedocs.io for more information. 00:09:06.345 00:09:06.345 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:06.345 NOTE: several default settings have changed in version 5.15, please make sure 00:09:06.345 this does not affect your deployments: 00:09:06.345 - DUP for metadata (-m dup) 00:09:06.345 - enabled no-holes (-O no-holes) 00:09:06.345 - enabled free-space-tree (-R free-space-tree) 00:09:06.345 00:09:06.345 Label: (null) 00:09:06.345 UUID: 9c3684ab-1c8b-4142-972b-270eb93337e8 00:09:06.345 Node size: 16384 00:09:06.345 Sector size: 4096 00:09:06.345 Filesystem size: 510.00MiB 00:09:06.345 Block group profiles: 00:09:06.345 Data: single 8.00MiB 00:09:06.345 Metadata: DUP 32.00MiB 00:09:06.345 System: DUP 8.00MiB 00:09:06.345 SSD detected: yes 00:09:06.345 Zoned device: no 00:09:06.345 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:06.345 Runtime features: free-space-tree 00:09:06.345 Checksum: crc32c 00:09:06.345 Number of devices: 1 00:09:06.345 Devices: 00:09:06.345 ID SIZE PATH 00:09:06.345 1 510.00MiB /dev/nvme0n1p1 00:09:06.345 00:09:06.345 21:12:41 -- common/autotest_common.sh@921 -- # return 0 00:09:06.345 21:12:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:06.604 21:12:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:06.604 21:12:41 -- target/filesystem.sh@25 -- # sync 00:09:06.604 21:12:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:06.604 21:12:41 -- target/filesystem.sh@27 -- # sync 00:09:06.604 21:12:41 -- target/filesystem.sh@29 -- # i=0 00:09:06.604 21:12:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:06.604 21:12:41 -- target/filesystem.sh@37 -- # kill -0 1540701 00:09:06.604 21:12:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:06.604 21:12:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:06.604 21:12:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:06.604 21:12:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:06.604 00:09:06.604 real 0m0.261s 00:09:06.604 user 0m0.029s 00:09:06.604 sys 0m0.140s 00:09:06.604 21:12:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.604 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:09:06.604 ************************************ 00:09:06.604 END TEST filesystem_in_capsule_btrfs 00:09:06.604 ************************************ 00:09:06.604 21:12:41 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:06.604 21:12:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:06.604 21:12:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.604 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:09:06.604 ************************************ 00:09:06.604 START TEST filesystem_in_capsule_xfs 00:09:06.604 ************************************ 00:09:06.604 21:12:41 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:09:06.604 21:12:41 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:06.604 21:12:41 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:09:06.604 21:12:41 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:06.604 21:12:41 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:09:06.604 21:12:41 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:06.604 21:12:41 -- common/autotest_common.sh@904 -- # local i=0 00:09:06.604 21:12:41 -- common/autotest_common.sh@905 -- # local force 00:09:06.604 21:12:41 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:09:06.604 21:12:41 -- common/autotest_common.sh@910 -- # force=-f 00:09:06.604 21:12:41 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:06.604 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:06.604 = sectsz=512 attr=2, projid32bit=1 00:09:06.604 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:06.604 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:06.604 data = bsize=4096 blocks=130560, imaxpct=25 00:09:06.604 = sunit=0 swidth=0 blks 00:09:06.604 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:06.604 log =internal log bsize=4096 blocks=16384, version=2 00:09:06.604 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:06.604 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:06.604 Discarding blocks...Done. 00:09:06.604 21:12:41 -- common/autotest_common.sh@921 -- # return 0 00:09:06.604 21:12:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:06.863 21:12:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:06.863 21:12:41 -- target/filesystem.sh@25 -- # sync 00:09:06.863 21:12:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:06.863 21:12:41 -- target/filesystem.sh@27 -- # sync 00:09:06.863 21:12:41 -- target/filesystem.sh@29 -- # i=0 00:09:06.863 21:12:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:06.863 21:12:41 -- target/filesystem.sh@37 -- # kill -0 1540701 00:09:06.863 21:12:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:06.863 21:12:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:06.863 21:12:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:06.863 21:12:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:06.863 00:09:06.863 real 0m0.202s 00:09:06.863 user 0m0.024s 00:09:06.863 sys 0m0.084s 00:09:06.863 21:12:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.863 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:09:06.863 ************************************ 00:09:06.863 END TEST filesystem_in_capsule_xfs 00:09:06.863 ************************************ 00:09:06.863 21:12:41 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:06.863 21:12:41 -- target/filesystem.sh@93 -- # sync 00:09:06.863 21:12:41 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.797 21:12:42 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.797 21:12:42 -- common/autotest_common.sh@1198 -- # local i=0 00:09:07.797 21:12:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:09:07.797 21:12:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.797 21:12:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:07.798 21:12:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.798 21:12:42 -- common/autotest_common.sh@1210 -- # return 0 00:09:07.798 21:12:42 -- target/filesystem.sh@97 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.798 21:12:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:07.798 21:12:42 -- common/autotest_common.sh@10 -- # set +x 00:09:07.798 21:12:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:07.798 21:12:42 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:07.798 21:12:42 -- target/filesystem.sh@101 -- # killprocess 1540701 00:09:07.798 21:12:42 -- common/autotest_common.sh@926 -- # '[' -z 1540701 ']' 00:09:07.798 21:12:42 -- common/autotest_common.sh@930 -- # kill -0 1540701 00:09:07.798 21:12:42 -- common/autotest_common.sh@931 -- # uname 00:09:07.798 21:12:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:07.798 21:12:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1540701 00:09:08.057 21:12:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:08.057 21:12:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:08.057 21:12:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1540701' 00:09:08.057 killing process with pid 1540701 00:09:08.057 21:12:42 -- common/autotest_common.sh@945 -- # kill 1540701 00:09:08.057 21:12:42 -- common/autotest_common.sh@950 -- # wait 1540701 00:09:08.316 21:12:43 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:08.316 00:09:08.316 real 0m7.933s 00:09:08.316 user 0m30.907s 00:09:08.316 sys 0m1.209s 00:09:08.316 21:12:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.316 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:09:08.316 ************************************ 00:09:08.316 END TEST nvmf_filesystem_in_capsule 00:09:08.316 ************************************ 00:09:08.316 21:12:43 -- target/filesystem.sh@108 -- # nvmftestfini 00:09:08.316 21:12:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:08.316 21:12:43 -- nvmf/common.sh@116 -- # sync 00:09:08.316 21:12:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:08.316 21:12:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:08.316 21:12:43 -- nvmf/common.sh@119 -- # set +e 00:09:08.316 21:12:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:08.316 21:12:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:08.316 rmmod nvme_rdma 00:09:08.316 rmmod nvme_fabrics 00:09:08.316 21:12:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:08.316 21:12:43 -- nvmf/common.sh@123 -- # set -e 00:09:08.316 21:12:43 -- nvmf/common.sh@124 -- # return 0 00:09:08.316 21:12:43 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:09:08.316 21:12:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:08.316 21:12:43 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:08.316 00:09:08.316 real 0m24.716s 00:09:08.316 user 1m4.570s 00:09:08.316 sys 0m8.991s 00:09:08.316 21:12:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.316 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:09:08.316 ************************************ 00:09:08.316 END TEST nvmf_filesystem 00:09:08.316 ************************************ 00:09:08.574 21:12:43 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:08.574 21:12:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:08.574 21:12:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.574 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:09:08.574 ************************************ 00:09:08.574 START TEST nvmf_discovery 00:09:08.574 
************************************ 00:09:08.574 21:12:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:08.574 * Looking for test storage... 00:09:08.574 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:08.574 21:12:43 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.574 21:12:43 -- nvmf/common.sh@7 -- # uname -s 00:09:08.574 21:12:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.574 21:12:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.574 21:12:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.574 21:12:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.574 21:12:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.574 21:12:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.574 21:12:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.574 21:12:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.574 21:12:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.574 21:12:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.574 21:12:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:08.574 21:12:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:08.574 21:12:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.574 21:12:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.574 21:12:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.574 21:12:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:08.574 21:12:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.574 21:12:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.574 21:12:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.574 21:12:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.574 21:12:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.574 21:12:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.574 21:12:43 -- paths/export.sh@5 -- # export PATH 00:09:08.574 21:12:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.574 21:12:43 -- nvmf/common.sh@46 -- # : 0 00:09:08.574 21:12:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:08.574 21:12:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:08.574 21:12:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:08.574 21:12:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.574 21:12:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.574 21:12:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:08.574 21:12:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:08.574 21:12:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:08.574 21:12:43 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:08.574 21:12:43 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:08.574 21:12:43 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:08.574 21:12:43 -- target/discovery.sh@15 -- # hash nvme 00:09:08.574 21:12:43 -- target/discovery.sh@20 -- # nvmftestinit 00:09:08.574 21:12:43 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:08.574 21:12:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.574 21:12:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:08.574 21:12:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:08.574 21:12:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:08.574 21:12:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.574 21:12:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.574 21:12:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.574 21:12:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:08.574 21:12:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:08.574 21:12:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:08.574 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:09:16.694 21:12:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:16.694 21:12:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:16.694 21:12:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:16.694 21:12:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:16.694 21:12:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:16.694 21:12:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:16.694 21:12:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:16.694 21:12:51 -- 
nvmf/common.sh@294 -- # net_devs=() 00:09:16.694 21:12:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:16.694 21:12:51 -- nvmf/common.sh@295 -- # e810=() 00:09:16.694 21:12:51 -- nvmf/common.sh@295 -- # local -ga e810 00:09:16.694 21:12:51 -- nvmf/common.sh@296 -- # x722=() 00:09:16.694 21:12:51 -- nvmf/common.sh@296 -- # local -ga x722 00:09:16.694 21:12:51 -- nvmf/common.sh@297 -- # mlx=() 00:09:16.694 21:12:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:16.694 21:12:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.694 21:12:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:16.694 21:12:51 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:16.694 21:12:51 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:16.694 21:12:51 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:16.694 21:12:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:16.694 21:12:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:16.694 21:12:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:16.694 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:16.694 21:12:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.694 21:12:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:16.694 21:12:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:16.694 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:16.694 21:12:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.694 21:12:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:16.694 21:12:51 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:16.694 
21:12:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.694 21:12:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:16.694 21:12:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.694 21:12:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:16.694 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:16.694 21:12:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.694 21:12:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:16.694 21:12:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.694 21:12:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:16.694 21:12:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.694 21:12:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:16.694 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:16.694 21:12:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.694 21:12:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:16.694 21:12:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:16.694 21:12:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:16.694 21:12:51 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:16.694 21:12:51 -- nvmf/common.sh@57 -- # uname 00:09:16.694 21:12:51 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:16.694 21:12:51 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:16.694 21:12:51 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:16.694 21:12:51 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:16.694 21:12:51 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:16.694 21:12:51 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:16.694 21:12:51 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:16.694 21:12:51 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:16.694 21:12:51 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:16.694 21:12:51 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:16.694 21:12:51 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:16.694 21:12:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.694 21:12:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:16.694 21:12:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:16.694 21:12:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.694 21:12:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:16.694 21:12:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:16.694 21:12:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.694 21:12:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:16.694 21:12:51 -- nvmf/common.sh@104 -- # continue 2 00:09:16.694 21:12:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:16.694 21:12:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.694 21:12:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.694 21:12:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:09:16.694 21:12:51 -- nvmf/common.sh@104 -- # continue 2 00:09:16.694 21:12:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:16.694 21:12:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:16.694 21:12:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:16.694 21:12:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:16.694 21:12:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:16.694 21:12:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:16.694 21:12:51 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:16.694 21:12:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:16.694 21:12:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:16.694 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:16.694 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:16.694 altname enp217s0f0np0 00:09:16.694 altname ens818f0np0 00:09:16.694 inet 192.168.100.8/24 scope global mlx_0_0 00:09:16.694 valid_lft forever preferred_lft forever 00:09:16.694 21:12:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:16.694 21:12:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:16.694 21:12:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:16.694 21:12:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:16.694 21:12:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:16.694 21:12:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:16.694 21:12:51 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:16.695 21:12:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:16.695 21:12:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:16.695 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:16.695 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:16.695 altname enp217s0f1np1 00:09:16.695 altname ens818f1np1 00:09:16.695 inet 192.168.100.9/24 scope global mlx_0_1 00:09:16.695 valid_lft forever preferred_lft forever 00:09:16.695 21:12:51 -- nvmf/common.sh@410 -- # return 0 00:09:16.695 21:12:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:16.695 21:12:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:16.695 21:12:51 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:16.695 21:12:51 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:16.695 21:12:51 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:16.695 21:12:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.695 21:12:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:16.695 21:12:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:16.695 21:12:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.695 21:12:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:16.695 21:12:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:16.695 21:12:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.695 21:12:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:16.695 21:12:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:16.695 21:12:51 -- nvmf/common.sh@104 -- # continue 2 00:09:16.695 21:12:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:16.695 21:12:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.695 21:12:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:16.695 21:12:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.695 21:12:51 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:16.695 21:12:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:16.695 21:12:51 -- nvmf/common.sh@104 -- # continue 2 00:09:16.695 21:12:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:16.695 21:12:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:16.695 21:12:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:16.695 21:12:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:16.695 21:12:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:16.695 21:12:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:16.695 21:12:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:16.695 21:12:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:16.695 21:12:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:16.695 21:12:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:16.695 21:12:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:16.695 21:12:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:16.695 21:12:51 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:16.695 192.168.100.9' 00:09:16.695 21:12:51 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:16.695 192.168.100.9' 00:09:16.695 21:12:51 -- nvmf/common.sh@445 -- # head -n 1 00:09:16.695 21:12:51 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:16.695 21:12:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:16.695 192.168.100.9' 00:09:16.695 21:12:51 -- nvmf/common.sh@446 -- # tail -n +2 00:09:16.695 21:12:51 -- nvmf/common.sh@446 -- # head -n 1 00:09:16.695 21:12:51 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:16.695 21:12:51 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:16.695 21:12:51 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:16.695 21:12:51 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:16.695 21:12:51 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:16.695 21:12:51 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:16.695 21:12:51 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:16.695 21:12:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:16.695 21:12:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:16.695 21:12:51 -- common/autotest_common.sh@10 -- # set +x 00:09:16.695 21:12:51 -- nvmf/common.sh@469 -- # nvmfpid=1546344 00:09:16.695 21:12:51 -- nvmf/common.sh@470 -- # waitforlisten 1546344 00:09:16.695 21:12:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.695 21:12:51 -- common/autotest_common.sh@819 -- # '[' -z 1546344 ']' 00:09:16.695 21:12:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.695 21:12:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:16.695 21:12:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.695 21:12:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:16.695 21:12:51 -- common/autotest_common.sh@10 -- # set +x 00:09:16.695 [2024-07-26 21:12:51.351648] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:09:16.695 [2024-07-26 21:12:51.351723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.695 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.695 [2024-07-26 21:12:51.439600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.695 [2024-07-26 21:12:51.477150] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:16.695 [2024-07-26 21:12:51.477267] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.695 [2024-07-26 21:12:51.477277] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.695 [2024-07-26 21:12:51.477286] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.695 [2024-07-26 21:12:51.477340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.695 [2024-07-26 21:12:51.477435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.695 [2024-07-26 21:12:51.477522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.695 [2024-07-26 21:12:51.477524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.632 21:12:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:17.632 21:12:52 -- common/autotest_common.sh@852 -- # return 0 00:09:17.632 21:12:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:17.632 21:12:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:17.632 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 21:12:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.632 21:12:52 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:17.632 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.632 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 [2024-07-26 21:12:52.219452] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11a5060/0x11a9550) succeed. 00:09:17.632 [2024-07-26 21:12:52.229811] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11a6650/0x11eabe0) succeed. 
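[Editor's note] The trace above launches nvmf_tgt and creates the RDMA transport; the trace that follows drives the same flow through the test's rpc_cmd wrapper, creating four null bdevs, one subsystem per bdev, and RDMA listeners on 192.168.100.8:4420, then verifies the result with nvme discover. A minimal sketch of the equivalent manual steps, assuming the commands are run from an SPDK source tree and the target is reachable at the default /var/tmp/spdk.sock (scripts/rpc.py standing in for the test's rpc_cmd wrapper):

  # start the target with the same shm id, tracepoint mask and core mask seen in the trace
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # RDMA transport with the options used by the test
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

  # one null bdev + subsystem + namespace + listener (the test repeats this for Null1..Null4 / cnode1..cnode4)
  ./scripts/rpc.py bdev_null_create Null1 102400 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # expose the discovery service plus a referral on port 4430, then check from the initiator side
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
  nvme discover -t rdma -a 192.168.100.8 -s 4420

With that setup the discovery log should report six records, matching the nvme discover output below: the current discovery subsystem, the four NVMe subsystems cnode1-cnode4, and the port 4430 referral.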
00:09:17.632 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.632 21:12:52 -- target/discovery.sh@26 -- # seq 1 4 00:09:17.632 21:12:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:17.632 21:12:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:17.632 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.632 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 Null1 00:09:17.632 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.632 21:12:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.632 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.632 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.632 21:12:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:17.632 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.632 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.632 21:12:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:17.632 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.632 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 [2024-07-26 21:12:52.394129] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:17.632 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.632 21:12:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:17.632 21:12:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:17.632 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.632 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 Null2 00:09:17.632 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.632 21:12:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:17.632 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.632 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:17.633 21:12:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 Null3 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:17.633 21:12:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 Null4 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.633 21:12:52 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:17.633 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.633 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.893 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.893 21:12:52 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:17.893 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.893 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.893 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.893 21:12:52 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:09:17.893 00:09:17.893 Discovery Log Number of Records 6, Generation counter 6 00:09:17.893 =====Discovery Log Entry 0====== 00:09:17.893 trtype: 
rdma 00:09:17.893 adrfam: ipv4 00:09:17.893 subtype: current discovery subsystem 00:09:17.893 treq: not required 00:09:17.893 portid: 0 00:09:17.893 trsvcid: 4420 00:09:17.893 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:17.893 traddr: 192.168.100.8 00:09:17.893 eflags: explicit discovery connections, duplicate discovery information 00:09:17.893 rdma_prtype: not specified 00:09:17.893 rdma_qptype: connected 00:09:17.893 rdma_cms: rdma-cm 00:09:17.893 rdma_pkey: 0x0000 00:09:17.893 =====Discovery Log Entry 1====== 00:09:17.893 trtype: rdma 00:09:17.893 adrfam: ipv4 00:09:17.893 subtype: nvme subsystem 00:09:17.893 treq: not required 00:09:17.893 portid: 0 00:09:17.893 trsvcid: 4420 00:09:17.893 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:17.893 traddr: 192.168.100.8 00:09:17.893 eflags: none 00:09:17.893 rdma_prtype: not specified 00:09:17.893 rdma_qptype: connected 00:09:17.893 rdma_cms: rdma-cm 00:09:17.893 rdma_pkey: 0x0000 00:09:17.893 =====Discovery Log Entry 2====== 00:09:17.893 trtype: rdma 00:09:17.893 adrfam: ipv4 00:09:17.893 subtype: nvme subsystem 00:09:17.893 treq: not required 00:09:17.893 portid: 0 00:09:17.893 trsvcid: 4420 00:09:17.893 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:17.893 traddr: 192.168.100.8 00:09:17.893 eflags: none 00:09:17.893 rdma_prtype: not specified 00:09:17.893 rdma_qptype: connected 00:09:17.893 rdma_cms: rdma-cm 00:09:17.893 rdma_pkey: 0x0000 00:09:17.893 =====Discovery Log Entry 3====== 00:09:17.893 trtype: rdma 00:09:17.893 adrfam: ipv4 00:09:17.893 subtype: nvme subsystem 00:09:17.893 treq: not required 00:09:17.893 portid: 0 00:09:17.893 trsvcid: 4420 00:09:17.893 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:17.893 traddr: 192.168.100.8 00:09:17.893 eflags: none 00:09:17.893 rdma_prtype: not specified 00:09:17.893 rdma_qptype: connected 00:09:17.893 rdma_cms: rdma-cm 00:09:17.893 rdma_pkey: 0x0000 00:09:17.893 =====Discovery Log Entry 4====== 00:09:17.893 trtype: rdma 00:09:17.893 adrfam: ipv4 00:09:17.893 subtype: nvme subsystem 00:09:17.893 treq: not required 00:09:17.893 portid: 0 00:09:17.893 trsvcid: 4420 00:09:17.893 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:17.893 traddr: 192.168.100.8 00:09:17.893 eflags: none 00:09:17.893 rdma_prtype: not specified 00:09:17.893 rdma_qptype: connected 00:09:17.893 rdma_cms: rdma-cm 00:09:17.893 rdma_pkey: 0x0000 00:09:17.893 =====Discovery Log Entry 5====== 00:09:17.893 trtype: rdma 00:09:17.893 adrfam: ipv4 00:09:17.893 subtype: discovery subsystem referral 00:09:17.893 treq: not required 00:09:17.893 portid: 0 00:09:17.893 trsvcid: 4430 00:09:17.893 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:17.893 traddr: 192.168.100.8 00:09:17.893 eflags: none 00:09:17.893 rdma_prtype: unrecognized 00:09:17.893 rdma_qptype: unrecognized 00:09:17.893 rdma_cms: unrecognized 00:09:17.893 rdma_pkey: 0x0000 00:09:17.893 21:12:52 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:17.893 Perform nvmf subsystem discovery via RPC 00:09:17.893 21:12:52 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:17.893 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.893 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.893 [2024-07-26 21:12:52.618616] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:17.893 [ 00:09:17.893 { 00:09:17.893 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:17.893 "subtype": "Discovery", 
00:09:17.893 "listen_addresses": [ 00:09:17.893 { 00:09:17.893 "transport": "RDMA", 00:09:17.893 "trtype": "RDMA", 00:09:17.893 "adrfam": "IPv4", 00:09:17.893 "traddr": "192.168.100.8", 00:09:17.893 "trsvcid": "4420" 00:09:17.893 } 00:09:17.893 ], 00:09:17.893 "allow_any_host": true, 00:09:17.893 "hosts": [] 00:09:17.893 }, 00:09:17.893 { 00:09:17.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:17.893 "subtype": "NVMe", 00:09:17.893 "listen_addresses": [ 00:09:17.893 { 00:09:17.893 "transport": "RDMA", 00:09:17.893 "trtype": "RDMA", 00:09:17.893 "adrfam": "IPv4", 00:09:17.893 "traddr": "192.168.100.8", 00:09:17.893 "trsvcid": "4420" 00:09:17.893 } 00:09:17.893 ], 00:09:17.893 "allow_any_host": true, 00:09:17.893 "hosts": [], 00:09:17.893 "serial_number": "SPDK00000000000001", 00:09:17.893 "model_number": "SPDK bdev Controller", 00:09:17.893 "max_namespaces": 32, 00:09:17.893 "min_cntlid": 1, 00:09:17.894 "max_cntlid": 65519, 00:09:17.894 "namespaces": [ 00:09:17.894 { 00:09:17.894 "nsid": 1, 00:09:17.894 "bdev_name": "Null1", 00:09:17.894 "name": "Null1", 00:09:17.894 "nguid": "9D3FC9B784B8498D86BF92254FAB4C10", 00:09:17.894 "uuid": "9d3fc9b7-84b8-498d-86bf-92254fab4c10" 00:09:17.894 } 00:09:17.894 ] 00:09:17.894 }, 00:09:17.894 { 00:09:17.894 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:17.894 "subtype": "NVMe", 00:09:17.894 "listen_addresses": [ 00:09:17.894 { 00:09:17.894 "transport": "RDMA", 00:09:17.894 "trtype": "RDMA", 00:09:17.894 "adrfam": "IPv4", 00:09:17.894 "traddr": "192.168.100.8", 00:09:17.894 "trsvcid": "4420" 00:09:17.894 } 00:09:17.894 ], 00:09:17.894 "allow_any_host": true, 00:09:17.894 "hosts": [], 00:09:17.894 "serial_number": "SPDK00000000000002", 00:09:17.894 "model_number": "SPDK bdev Controller", 00:09:17.894 "max_namespaces": 32, 00:09:17.894 "min_cntlid": 1, 00:09:17.894 "max_cntlid": 65519, 00:09:17.894 "namespaces": [ 00:09:17.894 { 00:09:17.894 "nsid": 1, 00:09:17.894 "bdev_name": "Null2", 00:09:17.894 "name": "Null2", 00:09:17.894 "nguid": "C157AE065E9546E29B14C15D9E2F071D", 00:09:17.894 "uuid": "c157ae06-5e95-46e2-9b14-c15d9e2f071d" 00:09:17.894 } 00:09:17.894 ] 00:09:17.894 }, 00:09:17.894 { 00:09:17.894 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:17.894 "subtype": "NVMe", 00:09:17.894 "listen_addresses": [ 00:09:17.894 { 00:09:17.894 "transport": "RDMA", 00:09:17.894 "trtype": "RDMA", 00:09:17.894 "adrfam": "IPv4", 00:09:17.894 "traddr": "192.168.100.8", 00:09:17.894 "trsvcid": "4420" 00:09:17.894 } 00:09:17.894 ], 00:09:17.894 "allow_any_host": true, 00:09:17.894 "hosts": [], 00:09:17.894 "serial_number": "SPDK00000000000003", 00:09:17.894 "model_number": "SPDK bdev Controller", 00:09:17.894 "max_namespaces": 32, 00:09:17.894 "min_cntlid": 1, 00:09:17.894 "max_cntlid": 65519, 00:09:17.894 "namespaces": [ 00:09:17.894 { 00:09:17.894 "nsid": 1, 00:09:17.894 "bdev_name": "Null3", 00:09:17.894 "name": "Null3", 00:09:17.894 "nguid": "6391C5E2F95E4143ADE99D6C4156F81D", 00:09:17.894 "uuid": "6391c5e2-f95e-4143-ade9-9d6c4156f81d" 00:09:17.894 } 00:09:17.894 ] 00:09:17.894 }, 00:09:17.894 { 00:09:17.894 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:17.894 "subtype": "NVMe", 00:09:17.894 "listen_addresses": [ 00:09:17.894 { 00:09:17.894 "transport": "RDMA", 00:09:17.894 "trtype": "RDMA", 00:09:17.894 "adrfam": "IPv4", 00:09:17.894 "traddr": "192.168.100.8", 00:09:17.894 "trsvcid": "4420" 00:09:17.894 } 00:09:17.894 ], 00:09:17.894 "allow_any_host": true, 00:09:17.894 "hosts": [], 00:09:17.894 "serial_number": "SPDK00000000000004", 00:09:17.894 "model_number": "SPDK bdev 
Controller", 00:09:17.894 "max_namespaces": 32, 00:09:17.894 "min_cntlid": 1, 00:09:17.894 "max_cntlid": 65519, 00:09:17.894 "namespaces": [ 00:09:17.894 { 00:09:17.894 "nsid": 1, 00:09:17.894 "bdev_name": "Null4", 00:09:17.894 "name": "Null4", 00:09:17.894 "nguid": "7BAA34277259427BAC014006EACCA33D", 00:09:17.894 "uuid": "7baa3427-7259-427b-ac01-4006eacca33d" 00:09:17.894 } 00:09:17.894 ] 00:09:17.894 } 00:09:17.894 ] 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@42 -- # seq 1 4 00:09:17.894 21:12:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:17.894 21:12:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:17.894 21:12:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:17.894 21:12:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:17.894 21:12:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 
21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.894 21:12:52 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:17.894 21:12:52 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:17.894 21:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.894 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 21:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.153 21:12:52 -- target/discovery.sh@49 -- # check_bdevs= 00:09:18.153 21:12:52 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:18.153 21:12:52 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:18.153 21:12:52 -- target/discovery.sh@57 -- # nvmftestfini 00:09:18.153 21:12:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:18.153 21:12:52 -- nvmf/common.sh@116 -- # sync 00:09:18.153 21:12:52 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:18.153 21:12:52 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:18.153 21:12:52 -- nvmf/common.sh@119 -- # set +e 00:09:18.153 21:12:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:18.153 21:12:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:18.153 rmmod nvme_rdma 00:09:18.153 rmmod nvme_fabrics 00:09:18.153 21:12:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:18.153 21:12:52 -- nvmf/common.sh@123 -- # set -e 00:09:18.153 21:12:52 -- nvmf/common.sh@124 -- # return 0 00:09:18.153 21:12:52 -- nvmf/common.sh@477 -- # '[' -n 1546344 ']' 00:09:18.153 21:12:52 -- nvmf/common.sh@478 -- # killprocess 1546344 00:09:18.153 21:12:52 -- common/autotest_common.sh@926 -- # '[' -z 1546344 ']' 00:09:18.153 21:12:52 -- common/autotest_common.sh@930 -- # kill -0 1546344 00:09:18.153 21:12:52 -- common/autotest_common.sh@931 -- # uname 00:09:18.153 21:12:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:18.153 21:12:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1546344 00:09:18.153 21:12:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:18.153 21:12:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:18.153 21:12:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1546344' 00:09:18.153 killing process with pid 1546344 00:09:18.154 21:12:52 -- common/autotest_common.sh@945 -- # kill 1546344 00:09:18.154 [2024-07-26 21:12:52.884900] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:09:18.154 21:12:52 -- common/autotest_common.sh@950 -- # wait 1546344 00:09:18.412 21:12:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:18.412 21:12:53 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:18.412 00:09:18.412 real 0m9.920s 00:09:18.412 user 0m8.753s 00:09:18.412 sys 0m6.621s 00:09:18.412 21:12:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.412 21:12:53 -- common/autotest_common.sh@10 -- # set +x 00:09:18.412 ************************************ 00:09:18.412 END TEST nvmf_discovery 00:09:18.412 ************************************ 00:09:18.412 21:12:53 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:18.412 21:12:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:18.412 21:12:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:18.412 21:12:53 -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.412 ************************************ 00:09:18.412 START TEST nvmf_referrals 00:09:18.412 ************************************ 00:09:18.412 21:12:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:18.412 * Looking for test storage... 00:09:18.412 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:18.412 21:12:53 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.412 21:12:53 -- nvmf/common.sh@7 -- # uname -s 00:09:18.671 21:12:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.671 21:12:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.671 21:12:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.671 21:12:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.671 21:12:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.671 21:12:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.671 21:12:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.671 21:12:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.671 21:12:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.671 21:12:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.671 21:12:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:18.671 21:12:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:18.671 21:12:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.671 21:12:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.671 21:12:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.671 21:12:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:18.671 21:12:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.671 21:12:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.671 21:12:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.671 21:12:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.671 21:12:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.671 21:12:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.671 21:12:53 -- paths/export.sh@5 -- # export PATH 00:09:18.671 21:12:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.671 21:12:53 -- nvmf/common.sh@46 -- # : 0 00:09:18.671 21:12:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:18.671 21:12:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:18.671 21:12:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:18.671 21:12:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.671 21:12:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.671 21:12:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:18.671 21:12:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:18.671 21:12:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:18.671 21:12:53 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:18.671 21:12:53 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:18.671 21:12:53 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:18.671 21:12:53 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:18.671 21:12:53 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:18.672 21:12:53 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:18.672 21:12:53 -- target/referrals.sh@37 -- # nvmftestinit 00:09:18.672 21:12:53 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:18.672 21:12:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.672 21:12:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:18.672 21:12:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:18.672 21:12:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:18.672 21:12:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.672 21:12:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.672 21:12:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.672 21:12:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:18.672 21:12:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:18.672 21:12:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:18.672 21:12:53 -- common/autotest_common.sh@10 -- # set +x 00:09:26.860 21:13:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:26.860 21:13:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:26.860 21:13:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:26.860 21:13:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 
00:09:26.860 21:13:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:26.860 21:13:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:26.860 21:13:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:26.860 21:13:01 -- nvmf/common.sh@294 -- # net_devs=() 00:09:26.860 21:13:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:26.860 21:13:01 -- nvmf/common.sh@295 -- # e810=() 00:09:26.860 21:13:01 -- nvmf/common.sh@295 -- # local -ga e810 00:09:26.860 21:13:01 -- nvmf/common.sh@296 -- # x722=() 00:09:26.860 21:13:01 -- nvmf/common.sh@296 -- # local -ga x722 00:09:26.860 21:13:01 -- nvmf/common.sh@297 -- # mlx=() 00:09:26.860 21:13:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:26.860 21:13:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.860 21:13:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:26.860 21:13:01 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:26.860 21:13:01 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:26.860 21:13:01 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:26.860 21:13:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:26.860 21:13:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:26.860 21:13:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:26.860 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:26.860 21:13:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:26.860 21:13:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:26.860 21:13:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:26.860 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:26.860 21:13:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect 
-i 15' 00:09:26.860 21:13:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:26.860 21:13:01 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:26.860 21:13:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.860 21:13:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:26.860 21:13:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.860 21:13:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:26.860 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:26.860 21:13:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.860 21:13:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:26.860 21:13:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.860 21:13:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:26.860 21:13:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.860 21:13:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:26.860 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:26.860 21:13:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.860 21:13:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:26.860 21:13:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:26.860 21:13:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:26.860 21:13:01 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:26.860 21:13:01 -- nvmf/common.sh@57 -- # uname 00:09:26.860 21:13:01 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:26.860 21:13:01 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:26.860 21:13:01 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:26.860 21:13:01 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:26.860 21:13:01 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:26.860 21:13:01 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:26.860 21:13:01 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:26.860 21:13:01 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:26.860 21:13:01 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:26.860 21:13:01 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:26.860 21:13:01 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:26.860 21:13:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.860 21:13:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:26.860 21:13:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:26.860 21:13:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.860 21:13:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:26.860 21:13:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:26.860 21:13:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.860 21:13:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:26.860 21:13:01 -- nvmf/common.sh@104 -- # continue 2 00:09:26.860 21:13:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:26.860 21:13:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.860 21:13:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.860 21:13:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:26.860 21:13:01 -- nvmf/common.sh@104 -- # continue 2 00:09:26.860 21:13:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:26.860 21:13:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:26.860 21:13:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:26.860 21:13:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:26.860 21:13:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:26.860 21:13:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:26.860 21:13:01 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:26.860 21:13:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:26.860 21:13:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:26.860 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.860 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:26.860 altname enp217s0f0np0 00:09:26.860 altname ens818f0np0 00:09:26.860 inet 192.168.100.8/24 scope global mlx_0_0 00:09:26.860 valid_lft forever preferred_lft forever 00:09:26.860 21:13:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:26.861 21:13:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:26.861 21:13:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:26.861 21:13:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:26.861 21:13:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:26.861 21:13:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:26.861 21:13:01 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:26.861 21:13:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:26.861 21:13:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:26.861 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.861 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:26.861 altname enp217s0f1np1 00:09:26.861 altname ens818f1np1 00:09:26.861 inet 192.168.100.9/24 scope global mlx_0_1 00:09:26.861 valid_lft forever preferred_lft forever 00:09:26.861 21:13:01 -- nvmf/common.sh@410 -- # return 0 00:09:26.861 21:13:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:26.861 21:13:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:26.861 21:13:01 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:26.861 21:13:01 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:26.861 21:13:01 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:26.861 21:13:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.861 21:13:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:26.861 21:13:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:26.861 21:13:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.861 21:13:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:26.861 21:13:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:26.861 21:13:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.861 21:13:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.861 21:13:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:26.861 21:13:01 -- nvmf/common.sh@104 -- # continue 2 00:09:26.861 21:13:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:26.861 21:13:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:09:26.861 21:13:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:26.861 21:13:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.861 21:13:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.861 21:13:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:26.861 21:13:01 -- nvmf/common.sh@104 -- # continue 2 00:09:26.861 21:13:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:26.861 21:13:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:26.861 21:13:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:26.861 21:13:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:26.861 21:13:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:26.861 21:13:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:26.861 21:13:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:26.861 21:13:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:26.861 21:13:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:26.861 21:13:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:26.861 21:13:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:26.861 21:13:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:26.861 21:13:01 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:26.861 192.168.100.9' 00:09:26.861 21:13:01 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:26.861 192.168.100.9' 00:09:26.861 21:13:01 -- nvmf/common.sh@445 -- # head -n 1 00:09:26.861 21:13:01 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:26.861 21:13:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:26.861 192.168.100.9' 00:09:26.861 21:13:01 -- nvmf/common.sh@446 -- # tail -n +2 00:09:26.861 21:13:01 -- nvmf/common.sh@446 -- # head -n 1 00:09:26.861 21:13:01 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:26.861 21:13:01 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:26.861 21:13:01 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:26.861 21:13:01 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:26.861 21:13:01 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:26.861 21:13:01 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:26.861 21:13:01 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:26.861 21:13:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:26.861 21:13:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:26.861 21:13:01 -- common/autotest_common.sh@10 -- # set +x 00:09:26.861 21:13:01 -- nvmf/common.sh@469 -- # nvmfpid=1550661 00:09:26.861 21:13:01 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.861 21:13:01 -- nvmf/common.sh@470 -- # waitforlisten 1550661 00:09:26.861 21:13:01 -- common/autotest_common.sh@819 -- # '[' -z 1550661 ']' 00:09:26.861 21:13:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.861 21:13:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:26.861 21:13:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:26.861 21:13:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:26.861 21:13:01 -- common/autotest_common.sh@10 -- # set +x 00:09:26.861 [2024-07-26 21:13:01.534566] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:26.861 [2024-07-26 21:13:01.534622] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.861 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.861 [2024-07-26 21:13:01.619503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.861 [2024-07-26 21:13:01.657526] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:26.861 [2024-07-26 21:13:01.657655] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.861 [2024-07-26 21:13:01.657665] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.861 [2024-07-26 21:13:01.657674] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.861 [2024-07-26 21:13:01.657729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.861 [2024-07-26 21:13:01.657824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.861 [2024-07-26 21:13:01.657890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.861 [2024-07-26 21:13:01.657891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.798 21:13:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:27.798 21:13:02 -- common/autotest_common.sh@852 -- # return 0 00:09:27.798 21:13:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:27.798 21:13:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:27.798 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:27.798 21:13:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.798 21:13:02 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:27.798 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.798 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:27.798 [2024-07-26 21:13:02.412945] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10f7060/0x10fb550) succeed. 00:09:27.798 [2024-07-26 21:13:02.423264] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10f8650/0x113cbe0) succeed. 
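rpc_cmd in the autotest framework forwards each call to the target's JSON-RPC interface. A hedged sketch of the referral setup around this point in the trace (the transport created just above, plus the discovery listener and three referrals added next), issued through scripts/rpc.py directly; the rpc.py path and socket are assumptions, while the method names and arguments are taken from the log:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

# Same sequence as the rpc_cmd calls in the surrounding trace.
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
for ref in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $RPC nvmf_discovery_add_referral -t rdma -a "$ref" -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length   # the test expects 3 here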
00:09:27.798 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.798 21:13:02 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:27.798 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.798 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:27.798 [2024-07-26 21:13:02.547412] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:27.798 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.798 21:13:02 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:27.798 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.798 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:27.798 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.798 21:13:02 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:27.798 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.798 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:27.798 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.798 21:13:02 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:27.798 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.798 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:27.798 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.798 21:13:02 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:27.798 21:13:02 -- target/referrals.sh@48 -- # jq length 00:09:27.798 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.798 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:27.798 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.798 21:13:02 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:27.798 21:13:02 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:27.798 21:13:02 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:27.798 21:13:02 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:27.798 21:13:02 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:27.798 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.798 21:13:02 -- target/referrals.sh@21 -- # sort 00:09:27.798 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:27.798 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.798 21:13:02 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:27.798 21:13:02 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:28.058 21:13:02 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:28.058 21:13:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:28.058 21:13:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:28.058 21:13:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.058 21:13:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:28.058 21:13:02 -- target/referrals.sh@26 -- # sort 00:09:28.058 21:13:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
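The get_referral_ips checks above and below compare two views of the same referral list: one reported over JSON-RPC and one taken from the discovery log page a host retrieves with nvme discover. A hedged sketch of that comparison, reusing the NVME_HOSTNQN/NVME_HOSTID variables that common.sh derives (visible later in this log):

# Compare the referral addresses reported over RPC with what a host sees
# in the discovery log page served on 192.168.100.8:8009.
rpc_ips=$(./scripts/rpc.py nvmf_discovery_get_referrals \
            | jq -r '.[].address.traddr' | sort | xargs)
nvme_ips=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t rdma -a 192.168.100.8 -s 8009 -o json \
            | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
            | sort | xargs)
[[ "$rpc_ips" == "$nvme_ips" ]] && echo "referrals match: $rpc_ips"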
00:09:28.058 21:13:02 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:28.058 21:13:02 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:28.058 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.058 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:28.058 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.058 21:13:02 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:28.058 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.058 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:28.058 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.058 21:13:02 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:28.058 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.058 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:28.058 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.058 21:13:02 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:28.058 21:13:02 -- target/referrals.sh@56 -- # jq length 00:09:28.058 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.058 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:28.058 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.058 21:13:02 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:28.058 21:13:02 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:28.058 21:13:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:28.058 21:13:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:28.058 21:13:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:28.058 21:13:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.058 21:13:02 -- target/referrals.sh@26 -- # sort 00:09:28.317 21:13:02 -- target/referrals.sh@26 -- # echo 00:09:28.317 21:13:02 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:28.317 21:13:02 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:28.317 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.317 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:28.317 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.317 21:13:02 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:28.317 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.317 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:28.317 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.317 21:13:02 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:28.317 21:13:02 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:28.317 21:13:02 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:28.317 21:13:02 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:28.317 21:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.317 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 21:13:02 -- 
target/referrals.sh@21 -- # sort 00:09:28.318 21:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.318 21:13:03 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:28.318 21:13:03 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:28.318 21:13:03 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:28.318 21:13:03 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:28.318 21:13:03 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:28.318 21:13:03 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.318 21:13:03 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:28.318 21:13:03 -- target/referrals.sh@26 -- # sort 00:09:28.318 21:13:03 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:28.318 21:13:03 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:28.318 21:13:03 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:28.318 21:13:03 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:28.318 21:13:03 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:28.318 21:13:03 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:28.318 21:13:03 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.580 21:13:03 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:28.580 21:13:03 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:28.580 21:13:03 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:28.580 21:13:03 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:28.580 21:13:03 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.580 21:13:03 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:28.580 21:13:03 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:28.581 21:13:03 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:28.581 21:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.581 21:13:03 -- common/autotest_common.sh@10 -- # set +x 00:09:28.581 21:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.581 21:13:03 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:28.581 21:13:03 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:28.581 21:13:03 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:28.581 21:13:03 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:28.581 21:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.581 21:13:03 -- target/referrals.sh@21 -- # sort 00:09:28.581 21:13:03 -- common/autotest_common.sh@10 -- # set 
+x 00:09:28.581 21:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.581 21:13:03 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:28.581 21:13:03 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:28.581 21:13:03 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:28.581 21:13:03 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:28.581 21:13:03 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:28.581 21:13:03 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.581 21:13:03 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:28.581 21:13:03 -- target/referrals.sh@26 -- # sort 00:09:28.581 21:13:03 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:28.581 21:13:03 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:28.581 21:13:03 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:28.581 21:13:03 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:28.581 21:13:03 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:28.581 21:13:03 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:28.581 21:13:03 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.839 21:13:03 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:28.839 21:13:03 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:28.839 21:13:03 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:28.839 21:13:03 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:28.839 21:13:03 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:28.839 21:13:03 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.840 21:13:03 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:28.840 21:13:03 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:28.840 21:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.840 21:13:03 -- common/autotest_common.sh@10 -- # set +x 00:09:28.840 21:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.840 21:13:03 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:28.840 21:13:03 -- target/referrals.sh@82 -- # jq length 00:09:28.840 21:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.840 21:13:03 -- common/autotest_common.sh@10 -- # set +x 00:09:28.840 21:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.840 21:13:03 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:28.840 21:13:03 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:28.840 21:13:03 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:28.840 21:13:03 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:28.840 21:13:03 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.840 21:13:03 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:28.840 21:13:03 -- target/referrals.sh@26 -- # sort 00:09:29.099 21:13:03 -- target/referrals.sh@26 -- # echo 00:09:29.099 21:13:03 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:29.099 21:13:03 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:29.099 21:13:03 -- target/referrals.sh@86 -- # nvmftestfini 00:09:29.099 21:13:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:29.099 21:13:03 -- nvmf/common.sh@116 -- # sync 00:09:29.099 21:13:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:29.099 21:13:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:29.099 21:13:03 -- nvmf/common.sh@119 -- # set +e 00:09:29.099 21:13:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:29.099 21:13:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:29.099 rmmod nvme_rdma 00:09:29.099 rmmod nvme_fabrics 00:09:29.099 21:13:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:29.099 21:13:03 -- nvmf/common.sh@123 -- # set -e 00:09:29.099 21:13:03 -- nvmf/common.sh@124 -- # return 0 00:09:29.099 21:13:03 -- nvmf/common.sh@477 -- # '[' -n 1550661 ']' 00:09:29.099 21:13:03 -- nvmf/common.sh@478 -- # killprocess 1550661 00:09:29.099 21:13:03 -- common/autotest_common.sh@926 -- # '[' -z 1550661 ']' 00:09:29.099 21:13:03 -- common/autotest_common.sh@930 -- # kill -0 1550661 00:09:29.099 21:13:03 -- common/autotest_common.sh@931 -- # uname 00:09:29.099 21:13:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:29.099 21:13:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1550661 00:09:29.099 21:13:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:29.099 21:13:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:29.099 21:13:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1550661' 00:09:29.099 killing process with pid 1550661 00:09:29.099 21:13:03 -- common/autotest_common.sh@945 -- # kill 1550661 00:09:29.099 21:13:03 -- common/autotest_common.sh@950 -- # wait 1550661 00:09:29.358 21:13:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:29.358 21:13:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:29.358 00:09:29.358 real 0m10.967s 00:09:29.358 user 0m12.790s 00:09:29.358 sys 0m7.075s 00:09:29.358 21:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.359 21:13:04 -- common/autotest_common.sh@10 -- # set +x 00:09:29.359 ************************************ 00:09:29.359 END TEST nvmf_referrals 00:09:29.359 ************************************ 00:09:29.359 21:13:04 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:29.359 21:13:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:29.359 21:13:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:29.359 21:13:04 -- common/autotest_common.sh@10 -- # set +x 00:09:29.359 ************************************ 00:09:29.359 START TEST nvmf_connect_disconnect 00:09:29.359 ************************************ 00:09:29.359 21:13:04 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:29.619 * Looking for test storage... 00:09:29.619 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:29.619 21:13:04 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.619 21:13:04 -- nvmf/common.sh@7 -- # uname -s 00:09:29.619 21:13:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.619 21:13:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.619 21:13:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.619 21:13:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.619 21:13:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.619 21:13:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.619 21:13:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.619 21:13:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.619 21:13:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.619 21:13:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.619 21:13:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:29.619 21:13:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:29.619 21:13:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.619 21:13:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.619 21:13:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.619 21:13:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:29.619 21:13:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.619 21:13:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.619 21:13:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.619 21:13:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.619 21:13:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.619 21:13:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.619 21:13:04 -- paths/export.sh@5 -- # export PATH 00:09:29.619 21:13:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.619 21:13:04 -- nvmf/common.sh@46 -- # : 0 00:09:29.619 21:13:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:29.619 21:13:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:29.619 21:13:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:29.619 21:13:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.619 21:13:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.619 21:13:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:29.619 21:13:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:29.619 21:13:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:29.619 21:13:04 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.619 21:13:04 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.619 21:13:04 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:29.619 21:13:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:29.619 21:13:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.619 21:13:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:29.619 21:13:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:29.619 21:13:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:29.619 21:13:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.619 21:13:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.619 21:13:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.619 21:13:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:29.619 21:13:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:29.619 21:13:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:29.619 21:13:04 -- common/autotest_common.sh@10 -- # set +x 00:09:37.749 21:13:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:37.749 21:13:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:37.749 21:13:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:37.749 21:13:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:37.749 21:13:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:37.749 21:13:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:37.749 21:13:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:37.749 21:13:12 -- nvmf/common.sh@294 -- # net_devs=() 00:09:37.749 21:13:12 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:09:37.749 21:13:12 -- nvmf/common.sh@295 -- # e810=() 00:09:37.749 21:13:12 -- nvmf/common.sh@295 -- # local -ga e810 00:09:37.749 21:13:12 -- nvmf/common.sh@296 -- # x722=() 00:09:37.749 21:13:12 -- nvmf/common.sh@296 -- # local -ga x722 00:09:37.749 21:13:12 -- nvmf/common.sh@297 -- # mlx=() 00:09:37.749 21:13:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:37.749 21:13:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.749 21:13:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:37.749 21:13:12 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:37.749 21:13:12 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:37.749 21:13:12 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:37.749 21:13:12 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:37.749 21:13:12 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:37.749 21:13:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:37.749 21:13:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:37.749 21:13:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:37.749 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:37.749 21:13:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:37.749 21:13:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:37.749 21:13:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:37.749 21:13:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:37.750 21:13:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:37.750 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:37.750 21:13:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:37.750 21:13:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:37.750 21:13:12 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.750 21:13:12 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:37.750 21:13:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.750 21:13:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:37.750 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:37.750 21:13:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.750 21:13:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.750 21:13:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:37.750 21:13:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.750 21:13:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:37.750 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:37.750 21:13:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.750 21:13:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:37.750 21:13:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:37.750 21:13:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:37.750 21:13:12 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:37.750 21:13:12 -- nvmf/common.sh@57 -- # uname 00:09:37.750 21:13:12 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:37.750 21:13:12 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:37.750 21:13:12 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:37.750 21:13:12 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:37.750 21:13:12 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:37.750 21:13:12 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:37.750 21:13:12 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:37.750 21:13:12 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:37.750 21:13:12 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:37.750 21:13:12 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:37.750 21:13:12 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:37.750 21:13:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:37.750 21:13:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:37.750 21:13:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:37.750 21:13:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:37.750 21:13:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:37.750 21:13:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:37.750 21:13:12 -- nvmf/common.sh@104 -- # continue 2 00:09:37.750 21:13:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:37.750 21:13:12 -- nvmf/common.sh@104 -- # continue 2 00:09:37.750 21:13:12 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:37.750 21:13:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:37.750 21:13:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:37.750 21:13:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:37.750 21:13:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:37.750 21:13:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:37.750 21:13:12 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:37.750 21:13:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:37.750 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:37.750 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:37.750 altname enp217s0f0np0 00:09:37.750 altname ens818f0np0 00:09:37.750 inet 192.168.100.8/24 scope global mlx_0_0 00:09:37.750 valid_lft forever preferred_lft forever 00:09:37.750 21:13:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:37.750 21:13:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:37.750 21:13:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:37.750 21:13:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:37.750 21:13:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:37.750 21:13:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:37.750 21:13:12 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:37.750 21:13:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:37.750 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:37.750 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:37.750 altname enp217s0f1np1 00:09:37.750 altname ens818f1np1 00:09:37.750 inet 192.168.100.9/24 scope global mlx_0_1 00:09:37.750 valid_lft forever preferred_lft forever 00:09:37.750 21:13:12 -- nvmf/common.sh@410 -- # return 0 00:09:37.750 21:13:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:37.750 21:13:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:37.750 21:13:12 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:37.750 21:13:12 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:37.750 21:13:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:37.750 21:13:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:37.750 21:13:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:37.750 21:13:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:37.750 21:13:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:37.750 21:13:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:37.750 21:13:12 -- nvmf/common.sh@104 -- # continue 2 00:09:37.750 21:13:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.750 21:13:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:37.750 21:13:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.010 21:13:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:38.010 21:13:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 
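The per-interface address lookup traced above reduces to a single pipeline. A minimal sketch of that helper, with the values this run resolved shown as comments:

# Print the first IPv4 address of an interface, without the prefix length,
# exactly as the ip/awk/cut pipeline in the trace does.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# In this run:
#   get_ip_address mlx_0_0   -> 192.168.100.8
#   get_ip_address mlx_0_1   -> 192.168.100.9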
00:09:38.010 21:13:12 -- nvmf/common.sh@104 -- # continue 2 00:09:38.010 21:13:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:38.010 21:13:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:38.010 21:13:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:38.010 21:13:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:38.010 21:13:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:38.010 21:13:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:38.010 21:13:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:38.010 21:13:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:38.010 21:13:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:38.010 21:13:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:38.010 21:13:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:38.010 21:13:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:38.010 21:13:12 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:38.010 192.168.100.9' 00:09:38.010 21:13:12 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:38.010 192.168.100.9' 00:09:38.010 21:13:12 -- nvmf/common.sh@445 -- # head -n 1 00:09:38.010 21:13:12 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:38.010 21:13:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:38.010 192.168.100.9' 00:09:38.010 21:13:12 -- nvmf/common.sh@446 -- # tail -n +2 00:09:38.010 21:13:12 -- nvmf/common.sh@446 -- # head -n 1 00:09:38.010 21:13:12 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:38.011 21:13:12 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:38.011 21:13:12 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:38.011 21:13:12 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:38.011 21:13:12 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:38.011 21:13:12 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:38.011 21:13:12 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:38.011 21:13:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:38.011 21:13:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:38.011 21:13:12 -- common/autotest_common.sh@10 -- # set +x 00:09:38.011 21:13:12 -- nvmf/common.sh@469 -- # nvmfpid=1555228 00:09:38.011 21:13:12 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.011 21:13:12 -- nvmf/common.sh@470 -- # waitforlisten 1555228 00:09:38.011 21:13:12 -- common/autotest_common.sh@819 -- # '[' -z 1555228 ']' 00:09:38.011 21:13:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.011 21:13:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.011 21:13:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.011 21:13:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.011 21:13:12 -- common/autotest_common.sh@10 -- # set +x 00:09:38.011 [2024-07-26 21:13:12.745362] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:09:38.011 [2024-07-26 21:13:12.745423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.011 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.011 [2024-07-26 21:13:12.835401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.011 [2024-07-26 21:13:12.872675] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:38.011 [2024-07-26 21:13:12.872792] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.011 [2024-07-26 21:13:12.872801] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.011 [2024-07-26 21:13:12.872810] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.011 [2024-07-26 21:13:12.872861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.011 [2024-07-26 21:13:12.872958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.011 [2024-07-26 21:13:12.873041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.011 [2024-07-26 21:13:12.873043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.946 21:13:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.946 21:13:13 -- common/autotest_common.sh@852 -- # return 0 00:09:38.946 21:13:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:38.946 21:13:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:38.946 21:13:13 -- common/autotest_common.sh@10 -- # set +x 00:09:38.947 21:13:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:38.947 21:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.947 21:13:13 -- common/autotest_common.sh@10 -- # set +x 00:09:38.947 [2024-07-26 21:13:13.602010] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:38.947 [2024-07-26 21:13:13.623839] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x211fec0/0x21243b0) succeed. 00:09:38.947 [2024-07-26 21:13:13.634128] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21214b0/0x2165a40) succeed. 
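The connect_disconnect test that follows builds one malloc-backed subsystem and then connects and disconnects a host controller 100 times (num_iterations=100 and NVME_CONNECT='nvme connect -i 8' in the trace). A hedged sketch of the equivalent sequence; the RPC calls mirror the rpc_cmd lines below, while the exact nvme connect arguments are assumptions, since the loop body itself is not echoed in this log:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC bdev_malloc_create 64 512                                  # returned Malloc0 in this run
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420

for i in $(seq 1 100); do
    # -i 8 mirrors the NVME_CONNECT prefix seen in the trace
    nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n "$NQN"
    nvme disconnect -n "$NQN"   # emits the "disconnected 1 controller(s)" lines below
done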
00:09:38.947 21:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:38.947 21:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.947 21:13:13 -- common/autotest_common.sh@10 -- # set +x 00:09:38.947 21:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:38.947 21:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.947 21:13:13 -- common/autotest_common.sh@10 -- # set +x 00:09:38.947 21:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.947 21:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.947 21:13:13 -- common/autotest_common.sh@10 -- # set +x 00:09:38.947 21:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:38.947 21:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.947 21:13:13 -- common/autotest_common.sh@10 -- # set +x 00:09:38.947 [2024-07-26 21:13:13.775170] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:38.947 21:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:38.947 21:13:13 -- target/connect_disconnect.sh@34 -- # set +x 00:09:42.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.469 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:22.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.260 21:18:27 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:53.260 21:18:27 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:53.260 21:18:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.260 21:18:27 -- nvmf/common.sh@116 -- # sync 00:14:53.260 21:18:27 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:53.260 21:18:27 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:53.260 21:18:27 -- nvmf/common.sh@119 -- # set +e 00:14:53.260 21:18:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.260 21:18:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:53.260 rmmod nvme_rdma 00:14:53.260 rmmod nvme_fabrics 00:14:53.260 21:18:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.260 21:18:27 -- nvmf/common.sh@123 -- # set -e 00:14:53.260 21:18:27 -- nvmf/common.sh@124 -- # return 0 00:14:53.260 21:18:27 -- nvmf/common.sh@477 -- # '[' -n 1555228 ']' 00:14:53.260 21:18:27 -- nvmf/common.sh@478 -- # killprocess 1555228 00:14:53.260 21:18:27 -- common/autotest_common.sh@926 -- # '[' -z 1555228 ']' 00:14:53.260 21:18:27 -- common/autotest_common.sh@930 -- # kill -0 1555228 00:14:53.260 21:18:27 -- common/autotest_common.sh@931 -- # uname 00:14:53.260 21:18:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:53.260 21:18:27 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1555228 00:14:53.260 21:18:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:53.260 21:18:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:53.260 21:18:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1555228' 00:14:53.260 killing process with pid 1555228 00:14:53.260 21:18:27 -- common/autotest_common.sh@945 -- # kill 1555228 00:14:53.260 21:18:27 -- common/autotest_common.sh@950 -- # wait 1555228 00:14:53.260 21:18:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:53.260 21:18:27 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:53.260 00:14:53.260 real 5m23.610s 00:14:53.260 user 20m57.050s 00:14:53.260 sys 0m18.270s 00:14:53.260 21:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.260 21:18:27 -- common/autotest_common.sh@10 -- # set +x 00:14:53.260 ************************************ 00:14:53.260 END TEST nvmf_connect_disconnect 00:14:53.260 ************************************ 00:14:53.260 21:18:27 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:53.260 21:18:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:53.260 21:18:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:53.260 21:18:27 -- common/autotest_common.sh@10 -- # set +x 00:14:53.260 ************************************ 00:14:53.260 START TEST nvmf_multitarget 00:14:53.260 ************************************ 00:14:53.260 21:18:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:53.260 * Looking for test storage... 00:14:53.260 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:53.260 21:18:27 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.260 21:18:27 -- nvmf/common.sh@7 -- # uname -s 00:14:53.260 21:18:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.260 21:18:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.260 21:18:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.260 21:18:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.260 21:18:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.260 21:18:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.260 21:18:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.260 21:18:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.260 21:18:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.260 21:18:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.260 21:18:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:53.260 21:18:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:53.260 21:18:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.260 21:18:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.260 21:18:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.260 21:18:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:53.260 21:18:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.260 21:18:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.260 
21:18:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.261 21:18:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.261 21:18:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.261 21:18:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.261 21:18:27 -- paths/export.sh@5 -- # export PATH 00:14:53.261 21:18:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.261 21:18:27 -- nvmf/common.sh@46 -- # : 0 00:14:53.261 21:18:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:53.261 21:18:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:53.261 21:18:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:53.261 21:18:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.261 21:18:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.261 21:18:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:53.261 21:18:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:53.261 21:18:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:53.261 21:18:27 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:53.261 21:18:27 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:53.261 21:18:27 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:53.261 21:18:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.261 21:18:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:53.261 21:18:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 
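nvmftestinit for the multitarget test starts here: prepare_net_devs and gather_supported_nvmf_pci_devs scan the PCI bus and, in the entries that follow, key candidate RDMA NICs by vendor:device ID before mapping them to net devices. The two Mellanox ports the trace reports (vendor 0x15b3, device 0x1015, at 0000:d9:00.0 and 0000:d9:00.1) can also be listed directly, for example:

  # list the ports matching the vendor/device pair the harness looks for
  lspci -nn -d 15b3:1015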
00:14:53.261 21:18:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:53.261 21:18:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.261 21:18:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.261 21:18:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.261 21:18:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:53.261 21:18:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:53.261 21:18:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:53.261 21:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:01.392 21:18:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:01.392 21:18:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:01.392 21:18:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:01.392 21:18:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:01.392 21:18:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:01.392 21:18:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:01.392 21:18:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:01.392 21:18:35 -- nvmf/common.sh@294 -- # net_devs=() 00:15:01.392 21:18:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:01.392 21:18:35 -- nvmf/common.sh@295 -- # e810=() 00:15:01.392 21:18:35 -- nvmf/common.sh@295 -- # local -ga e810 00:15:01.392 21:18:35 -- nvmf/common.sh@296 -- # x722=() 00:15:01.392 21:18:35 -- nvmf/common.sh@296 -- # local -ga x722 00:15:01.392 21:18:35 -- nvmf/common.sh@297 -- # mlx=() 00:15:01.392 21:18:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:01.392 21:18:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.392 21:18:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:01.392 21:18:35 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:01.392 21:18:35 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:01.392 21:18:35 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:01.392 21:18:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:01.392 21:18:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:01.392 21:18:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:01.392 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:01.392 21:18:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:01.392 
21:18:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:01.392 21:18:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:01.392 21:18:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:01.392 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:01.392 21:18:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:01.392 21:18:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:01.392 21:18:35 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:01.392 21:18:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.392 21:18:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:01.392 21:18:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.392 21:18:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:01.392 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:01.392 21:18:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.392 21:18:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:01.392 21:18:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.392 21:18:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:01.392 21:18:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.392 21:18:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:01.392 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:01.392 21:18:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.392 21:18:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:01.392 21:18:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:01.392 21:18:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:01.392 21:18:35 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:01.393 21:18:35 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:01.393 21:18:35 -- nvmf/common.sh@57 -- # uname 00:15:01.393 21:18:35 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:01.393 21:18:35 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:01.393 21:18:35 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:01.393 21:18:35 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:01.393 21:18:35 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:01.393 21:18:35 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:01.393 21:18:35 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:01.393 21:18:35 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:01.393 21:18:35 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:01.393 21:18:35 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:01.393 21:18:35 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:01.393 21:18:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:01.393 21:18:35 -- nvmf/common.sh@93 -- # 
mapfile -t rxe_net_devs 00:15:01.393 21:18:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:01.393 21:18:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:01.393 21:18:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:01.393 21:18:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:01.393 21:18:35 -- nvmf/common.sh@104 -- # continue 2 00:15:01.393 21:18:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:01.393 21:18:35 -- nvmf/common.sh@104 -- # continue 2 00:15:01.393 21:18:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:01.393 21:18:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:01.393 21:18:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:01.393 21:18:35 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:01.393 21:18:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:01.393 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:01.393 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:01.393 altname enp217s0f0np0 00:15:01.393 altname ens818f0np0 00:15:01.393 inet 192.168.100.8/24 scope global mlx_0_0 00:15:01.393 valid_lft forever preferred_lft forever 00:15:01.393 21:18:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:01.393 21:18:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:01.393 21:18:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:01.393 21:18:35 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:01.393 21:18:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:01.393 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:01.393 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:01.393 altname enp217s0f1np1 00:15:01.393 altname ens818f1np1 00:15:01.393 inet 192.168.100.9/24 scope global mlx_0_1 00:15:01.393 valid_lft forever preferred_lft forever 00:15:01.393 21:18:35 -- nvmf/common.sh@410 -- # return 0 00:15:01.393 21:18:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:01.393 21:18:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:01.393 21:18:35 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:01.393 21:18:35 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:01.393 21:18:35 -- nvmf/common.sh@91 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:15:01.393 21:18:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:01.393 21:18:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:01.393 21:18:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:01.393 21:18:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:01.393 21:18:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:01.393 21:18:35 -- nvmf/common.sh@104 -- # continue 2 00:15:01.393 21:18:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.393 21:18:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:01.393 21:18:35 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:01.393 21:18:35 -- nvmf/common.sh@104 -- # continue 2 00:15:01.393 21:18:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:01.393 21:18:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:01.393 21:18:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:01.393 21:18:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:01.393 21:18:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:01.393 21:18:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:01.393 21:18:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:01.393 21:18:35 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:01.393 192.168.100.9' 00:15:01.393 21:18:35 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:01.393 192.168.100.9' 00:15:01.393 21:18:35 -- nvmf/common.sh@445 -- # head -n 1 00:15:01.393 21:18:35 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:01.393 21:18:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:01.393 192.168.100.9' 00:15:01.393 21:18:35 -- nvmf/common.sh@446 -- # tail -n +2 00:15:01.393 21:18:35 -- nvmf/common.sh@446 -- # head -n 1 00:15:01.393 21:18:35 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:01.393 21:18:35 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:01.393 21:18:35 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:01.393 21:18:35 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:01.393 21:18:35 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:01.393 21:18:35 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:01.393 21:18:36 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:01.393 21:18:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:01.393 21:18:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:01.393 21:18:36 -- common/autotest_common.sh@10 -- # set +x 00:15:01.393 21:18:36 -- nvmf/common.sh@469 -- # nvmfpid=1615825 00:15:01.393 21:18:36 -- 
nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:01.393 21:18:36 -- nvmf/common.sh@470 -- # waitforlisten 1615825 00:15:01.393 21:18:36 -- common/autotest_common.sh@819 -- # '[' -z 1615825 ']' 00:15:01.393 21:18:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.393 21:18:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:01.393 21:18:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.393 21:18:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:01.393 21:18:36 -- common/autotest_common.sh@10 -- # set +x 00:15:01.393 [2024-07-26 21:18:36.068096] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:01.393 [2024-07-26 21:18:36.068144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.393 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.393 [2024-07-26 21:18:36.156213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.393 [2024-07-26 21:18:36.195155] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:01.393 [2024-07-26 21:18:36.195266] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.393 [2024-07-26 21:18:36.195277] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.393 [2024-07-26 21:18:36.195286] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
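Once its reactors are up, the multitarget test below exercises creation and deletion of extra target objects through the multitarget_rpc.py helper named in the trace and checks the target count after each step. Stripped of the harness wrappers, the traced sequence amounts to roughly the following (the pipe into jq reflects how the test's jq-length checks read in the script; the helper path is copied from the trace):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length       # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length       # 3 after the two extra targets
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length       # back to 1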
00:15:01.393 [2024-07-26 21:18:36.195336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.393 [2024-07-26 21:18:36.195353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.393 [2024-07-26 21:18:36.195441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.393 [2024-07-26 21:18:36.195443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.331 21:18:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:02.331 21:18:36 -- common/autotest_common.sh@852 -- # return 0 00:15:02.331 21:18:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:02.331 21:18:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:02.331 21:18:36 -- common/autotest_common.sh@10 -- # set +x 00:15:02.331 21:18:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.331 21:18:36 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:02.331 21:18:36 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:02.331 21:18:36 -- target/multitarget.sh@21 -- # jq length 00:15:02.331 21:18:37 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:02.331 21:18:37 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:02.331 "nvmf_tgt_1" 00:15:02.331 21:18:37 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:02.331 "nvmf_tgt_2" 00:15:02.590 21:18:37 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:02.590 21:18:37 -- target/multitarget.sh@28 -- # jq length 00:15:02.590 21:18:37 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:02.590 21:18:37 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:02.590 true 00:15:02.590 21:18:37 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:02.849 true 00:15:02.849 21:18:37 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:02.849 21:18:37 -- target/multitarget.sh@35 -- # jq length 00:15:02.849 21:18:37 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:02.849 21:18:37 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:02.849 21:18:37 -- target/multitarget.sh@41 -- # nvmftestfini 00:15:02.849 21:18:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:02.849 21:18:37 -- nvmf/common.sh@116 -- # sync 00:15:02.849 21:18:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:02.849 21:18:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:02.849 21:18:37 -- nvmf/common.sh@119 -- # set +e 00:15:02.849 21:18:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:02.849 21:18:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:02.849 rmmod nvme_rdma 00:15:02.849 rmmod nvme_fabrics 00:15:02.849 21:18:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.849 21:18:37 -- nvmf/common.sh@123 -- # set -e 00:15:02.849 21:18:37 -- nvmf/common.sh@124 -- # 
return 0 00:15:02.849 21:18:37 -- nvmf/common.sh@477 -- # '[' -n 1615825 ']' 00:15:02.849 21:18:37 -- nvmf/common.sh@478 -- # killprocess 1615825 00:15:02.849 21:18:37 -- common/autotest_common.sh@926 -- # '[' -z 1615825 ']' 00:15:02.849 21:18:37 -- common/autotest_common.sh@930 -- # kill -0 1615825 00:15:02.849 21:18:37 -- common/autotest_common.sh@931 -- # uname 00:15:02.849 21:18:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.849 21:18:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1615825 00:15:03.108 21:18:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:03.108 21:18:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:03.108 21:18:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1615825' 00:15:03.108 killing process with pid 1615825 00:15:03.108 21:18:37 -- common/autotest_common.sh@945 -- # kill 1615825 00:15:03.108 21:18:37 -- common/autotest_common.sh@950 -- # wait 1615825 00:15:03.108 21:18:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:03.108 21:18:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:03.108 00:15:03.108 real 0m10.033s 00:15:03.108 user 0m9.637s 00:15:03.108 sys 0m6.680s 00:15:03.108 21:18:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.108 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:15:03.108 ************************************ 00:15:03.108 END TEST nvmf_multitarget 00:15:03.108 ************************************ 00:15:03.108 21:18:37 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:15:03.108 21:18:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:03.108 21:18:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:03.108 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:15:03.108 ************************************ 00:15:03.108 START TEST nvmf_rpc 00:15:03.108 ************************************ 00:15:03.108 21:18:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:15:03.367 * Looking for test storage... 
00:15:03.367 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:03.367 21:18:38 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.367 21:18:38 -- nvmf/common.sh@7 -- # uname -s 00:15:03.367 21:18:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.367 21:18:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.367 21:18:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.367 21:18:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.367 21:18:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.367 21:18:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.367 21:18:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.367 21:18:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.367 21:18:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.367 21:18:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.367 21:18:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:03.367 21:18:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:03.367 21:18:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.367 21:18:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.367 21:18:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.367 21:18:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:03.367 21:18:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.367 21:18:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.367 21:18:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.367 21:18:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.367 21:18:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.367 21:18:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.367 21:18:38 -- paths/export.sh@5 -- # export PATH 00:15:03.367 21:18:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.367 21:18:38 -- nvmf/common.sh@46 -- # : 0 00:15:03.367 21:18:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:03.367 21:18:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:03.367 21:18:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:03.367 21:18:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.367 21:18:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.367 21:18:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:03.367 21:18:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:03.367 21:18:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:03.367 21:18:38 -- target/rpc.sh@11 -- # loops=5 00:15:03.367 21:18:38 -- target/rpc.sh@23 -- # nvmftestinit 00:15:03.367 21:18:38 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:03.367 21:18:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.367 21:18:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:03.367 21:18:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:03.367 21:18:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:03.367 21:18:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.367 21:18:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.367 21:18:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.367 21:18:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:03.367 21:18:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:03.367 21:18:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:03.367 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:15:11.530 21:18:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:11.530 21:18:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:11.530 21:18:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:11.530 21:18:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:11.530 21:18:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:11.530 21:18:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:11.530 21:18:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:11.530 21:18:45 -- nvmf/common.sh@294 -- # net_devs=() 00:15:11.530 21:18:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:11.530 21:18:45 -- nvmf/common.sh@295 -- # e810=() 00:15:11.530 21:18:45 -- nvmf/common.sh@295 -- # local -ga e810 00:15:11.530 
21:18:45 -- nvmf/common.sh@296 -- # x722=() 00:15:11.530 21:18:45 -- nvmf/common.sh@296 -- # local -ga x722 00:15:11.530 21:18:45 -- nvmf/common.sh@297 -- # mlx=() 00:15:11.530 21:18:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:11.530 21:18:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.530 21:18:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:11.530 21:18:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:11.530 21:18:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:11.530 21:18:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:11.530 21:18:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:11.530 21:18:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.530 21:18:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:11.530 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:11.530 21:18:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:11.530 21:18:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.530 21:18:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:11.530 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:11.530 21:18:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:11.530 21:18:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:11.530 21:18:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.530 21:18:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.530 21:18:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.530 21:18:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
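Each discovered PCI function is resolved to its kernel net device by globbing the function's net/ directory in sysfs, which is what the pci_net_devs assignments above do; the next entries report mlx_0_0 and mlx_0_1 as the results. Done by hand for the first port (address taken from the trace), the same lookup is simply:

  # net device(s) backing PCI function 0000:d9:00.0
  ls /sys/bus/pci/devices/0000:d9:00.0/net/     # -> mlx_0_0 on this machine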
00:15:11.530 21:18:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:11.530 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:11.530 21:18:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.530 21:18:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.530 21:18:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.530 21:18:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.530 21:18:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.530 21:18:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:11.530 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:11.530 21:18:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.530 21:18:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:11.530 21:18:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:11.530 21:18:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:11.530 21:18:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:11.530 21:18:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:11.530 21:18:45 -- nvmf/common.sh@57 -- # uname 00:15:11.530 21:18:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:11.530 21:18:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:11.530 21:18:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:11.530 21:18:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:11.530 21:18:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:11.530 21:18:46 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:11.530 21:18:46 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:11.530 21:18:46 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:11.530 21:18:46 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:11.530 21:18:46 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:11.530 21:18:46 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:11.530 21:18:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:11.530 21:18:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:11.530 21:18:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:11.530 21:18:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:11.530 21:18:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:11.530 21:18:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:11.530 21:18:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.530 21:18:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:11.530 21:18:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:11.530 21:18:46 -- nvmf/common.sh@104 -- # continue 2 00:15:11.530 21:18:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:11.530 21:18:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.530 21:18:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:11.530 21:18:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.530 21:18:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:11.530 21:18:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:11.530 21:18:46 -- nvmf/common.sh@104 -- # continue 2 00:15:11.530 21:18:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:11.530 21:18:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
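allocate_nic_ips then records the IPv4 address of each RDMA interface; get_ip_address, whose expansion follows here exactly as it did for the earlier target instances, reduces to one pipeline (interface name and resulting address taken from the trace):

  # first IPv4 address on mlx_0_0, as nvmf/common.sh derives it
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1     # -> 192.168.100.8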
00:15:11.530 21:18:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:11.530 21:18:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:11.530 21:18:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:11.530 21:18:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:11.530 21:18:46 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:11.530 21:18:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:11.530 21:18:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:11.530 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:11.530 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:11.530 altname enp217s0f0np0 00:15:11.530 altname ens818f0np0 00:15:11.530 inet 192.168.100.8/24 scope global mlx_0_0 00:15:11.530 valid_lft forever preferred_lft forever 00:15:11.530 21:18:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:11.530 21:18:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:11.530 21:18:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:11.530 21:18:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:11.530 21:18:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:11.530 21:18:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:11.530 21:18:46 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:11.530 21:18:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:11.530 21:18:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:11.530 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:11.530 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:11.530 altname enp217s0f1np1 00:15:11.530 altname ens818f1np1 00:15:11.530 inet 192.168.100.9/24 scope global mlx_0_1 00:15:11.530 valid_lft forever preferred_lft forever 00:15:11.530 21:18:46 -- nvmf/common.sh@410 -- # return 0 00:15:11.530 21:18:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:11.530 21:18:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:11.530 21:18:46 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:11.530 21:18:46 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:11.530 21:18:46 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:11.530 21:18:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:11.531 21:18:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:11.531 21:18:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:11.531 21:18:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:11.531 21:18:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:11.531 21:18:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:11.531 21:18:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.531 21:18:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:11.531 21:18:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:11.531 21:18:46 -- nvmf/common.sh@104 -- # continue 2 00:15:11.531 21:18:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:11.531 21:18:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.531 21:18:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:11.531 21:18:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.531 21:18:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:11.531 21:18:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:11.531 21:18:46 -- nvmf/common.sh@104 -- # continue 2 00:15:11.531 21:18:46 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:15:11.531 21:18:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:11.531 21:18:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:11.531 21:18:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:11.531 21:18:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:11.531 21:18:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:11.531 21:18:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:11.531 21:18:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:11.531 21:18:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:11.531 21:18:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:11.531 21:18:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:11.531 21:18:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:11.531 21:18:46 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:11.531 192.168.100.9' 00:15:11.531 21:18:46 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:11.531 192.168.100.9' 00:15:11.531 21:18:46 -- nvmf/common.sh@445 -- # head -n 1 00:15:11.531 21:18:46 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:11.531 21:18:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:11.531 192.168.100.9' 00:15:11.531 21:18:46 -- nvmf/common.sh@446 -- # tail -n +2 00:15:11.531 21:18:46 -- nvmf/common.sh@446 -- # head -n 1 00:15:11.531 21:18:46 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:11.531 21:18:46 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:11.531 21:18:46 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:11.531 21:18:46 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:11.531 21:18:46 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:11.531 21:18:46 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:11.531 21:18:46 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:11.531 21:18:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:11.531 21:18:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:11.531 21:18:46 -- common/autotest_common.sh@10 -- # set +x 00:15:11.531 21:18:46 -- nvmf/common.sh@469 -- # nvmfpid=1620210 00:15:11.531 21:18:46 -- nvmf/common.sh@470 -- # waitforlisten 1620210 00:15:11.531 21:18:46 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:11.531 21:18:46 -- common/autotest_common.sh@819 -- # '[' -z 1620210 ']' 00:15:11.531 21:18:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.531 21:18:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:11.531 21:18:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.531 21:18:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:11.531 21:18:46 -- common/autotest_common.sh@10 -- # set +x 00:15:11.531 [2024-07-26 21:18:46.267270] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
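The third nvmf_tgt instance starts with the same parameters as the previous two, and the rpc test opens by snapshotting transport statistics; the large JSON document further down is the output of that call. Outside the harness, the equivalent of the traced rpc_cmd/jq checks would be roughly (the scripts/rpc.py path is an assumption):

  # dump per-poll-group statistics from the running target
  ./scripts/rpc.py nvmf_get_stats
  # count poll groups the way the test's jcount helper does (4 expected here, one per core)
  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l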
00:15:11.531 [2024-07-26 21:18:46.267317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.531 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.531 [2024-07-26 21:18:46.355576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.531 [2024-07-26 21:18:46.393330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:11.531 [2024-07-26 21:18:46.393444] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.531 [2024-07-26 21:18:46.393455] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.531 [2024-07-26 21:18:46.393464] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.531 [2024-07-26 21:18:46.393518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.531 [2024-07-26 21:18:46.393613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.531 [2024-07-26 21:18:46.393640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.531 [2024-07-26 21:18:46.393655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.469 21:18:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.469 21:18:47 -- common/autotest_common.sh@852 -- # return 0 00:15:12.469 21:18:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:12.469 21:18:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:12.469 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.469 21:18:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.469 21:18:47 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:12.469 21:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.469 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.469 21:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.469 21:18:47 -- target/rpc.sh@26 -- # stats='{ 00:15:12.469 "tick_rate": 2500000000, 00:15:12.469 "poll_groups": [ 00:15:12.469 { 00:15:12.469 "name": "nvmf_tgt_poll_group_0", 00:15:12.469 "admin_qpairs": 0, 00:15:12.469 "io_qpairs": 0, 00:15:12.469 "current_admin_qpairs": 0, 00:15:12.469 "current_io_qpairs": 0, 00:15:12.469 "pending_bdev_io": 0, 00:15:12.469 "completed_nvme_io": 0, 00:15:12.469 "transports": [] 00:15:12.469 }, 00:15:12.469 { 00:15:12.469 "name": "nvmf_tgt_poll_group_1", 00:15:12.469 "admin_qpairs": 0, 00:15:12.469 "io_qpairs": 0, 00:15:12.469 "current_admin_qpairs": 0, 00:15:12.469 "current_io_qpairs": 0, 00:15:12.469 "pending_bdev_io": 0, 00:15:12.469 "completed_nvme_io": 0, 00:15:12.469 "transports": [] 00:15:12.469 }, 00:15:12.469 { 00:15:12.469 "name": "nvmf_tgt_poll_group_2", 00:15:12.469 "admin_qpairs": 0, 00:15:12.469 "io_qpairs": 0, 00:15:12.469 "current_admin_qpairs": 0, 00:15:12.469 "current_io_qpairs": 0, 00:15:12.469 "pending_bdev_io": 0, 00:15:12.469 "completed_nvme_io": 0, 00:15:12.469 "transports": [] 00:15:12.469 }, 00:15:12.469 { 00:15:12.469 "name": "nvmf_tgt_poll_group_3", 00:15:12.469 "admin_qpairs": 0, 00:15:12.469 "io_qpairs": 0, 00:15:12.469 "current_admin_qpairs": 0, 00:15:12.469 "current_io_qpairs": 0, 00:15:12.469 "pending_bdev_io": 0, 00:15:12.469 "completed_nvme_io": 0, 00:15:12.469 "transports": [] 
00:15:12.469 } 00:15:12.469 ] 00:15:12.469 }' 00:15:12.469 21:18:47 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:12.469 21:18:47 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:12.469 21:18:47 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:12.469 21:18:47 -- target/rpc.sh@15 -- # wc -l 00:15:12.469 21:18:47 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:12.469 21:18:47 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:12.469 21:18:47 -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:12.469 21:18:47 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:12.469 21:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.469 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.469 [2024-07-26 21:18:47.259705] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9ef080/0x9f3570) succeed. 00:15:12.469 [2024-07-26 21:18:47.270092] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9f0670/0xa34c00) succeed. 00:15:12.729 21:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.729 21:18:47 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:12.729 21:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.729 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.729 21:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.729 21:18:47 -- target/rpc.sh@33 -- # stats='{ 00:15:12.729 "tick_rate": 2500000000, 00:15:12.729 "poll_groups": [ 00:15:12.729 { 00:15:12.729 "name": "nvmf_tgt_poll_group_0", 00:15:12.729 "admin_qpairs": 0, 00:15:12.729 "io_qpairs": 0, 00:15:12.729 "current_admin_qpairs": 0, 00:15:12.729 "current_io_qpairs": 0, 00:15:12.729 "pending_bdev_io": 0, 00:15:12.729 "completed_nvme_io": 0, 00:15:12.729 "transports": [ 00:15:12.729 { 00:15:12.729 "trtype": "RDMA", 00:15:12.729 "pending_data_buffer": 0, 00:15:12.729 "devices": [ 00:15:12.729 { 00:15:12.729 "name": "mlx5_0", 00:15:12.729 "polls": 15594, 00:15:12.729 "idle_polls": 15594, 00:15:12.729 "completions": 0, 00:15:12.729 "requests": 0, 00:15:12.729 "request_latency": 0, 00:15:12.729 "pending_free_request": 0, 00:15:12.729 "pending_rdma_read": 0, 00:15:12.729 "pending_rdma_write": 0, 00:15:12.729 "pending_rdma_send": 0, 00:15:12.729 "total_send_wrs": 0, 00:15:12.729 "send_doorbell_updates": 0, 00:15:12.729 "total_recv_wrs": 4096, 00:15:12.729 "recv_doorbell_updates": 1 00:15:12.729 }, 00:15:12.729 { 00:15:12.729 "name": "mlx5_1", 00:15:12.729 "polls": 15594, 00:15:12.730 "idle_polls": 15594, 00:15:12.730 "completions": 0, 00:15:12.730 "requests": 0, 00:15:12.730 "request_latency": 0, 00:15:12.730 "pending_free_request": 0, 00:15:12.730 "pending_rdma_read": 0, 00:15:12.730 "pending_rdma_write": 0, 00:15:12.730 "pending_rdma_send": 0, 00:15:12.730 "total_send_wrs": 0, 00:15:12.730 "send_doorbell_updates": 0, 00:15:12.730 "total_recv_wrs": 4096, 00:15:12.730 "recv_doorbell_updates": 1 00:15:12.730 } 00:15:12.730 ] 00:15:12.730 } 00:15:12.730 ] 00:15:12.730 }, 00:15:12.730 { 00:15:12.730 "name": "nvmf_tgt_poll_group_1", 00:15:12.730 "admin_qpairs": 0, 00:15:12.730 "io_qpairs": 0, 00:15:12.730 "current_admin_qpairs": 0, 00:15:12.730 "current_io_qpairs": 0, 00:15:12.730 "pending_bdev_io": 0, 00:15:12.730 "completed_nvme_io": 0, 00:15:12.730 "transports": [ 00:15:12.730 { 00:15:12.730 "trtype": "RDMA", 00:15:12.730 "pending_data_buffer": 0, 00:15:12.730 "devices": [ 00:15:12.730 { 00:15:12.730 "name": "mlx5_0", 00:15:12.730 "polls": 9874, 
00:15:12.730 "idle_polls": 9874, 00:15:12.730 "completions": 0, 00:15:12.730 "requests": 0, 00:15:12.730 "request_latency": 0, 00:15:12.730 "pending_free_request": 0, 00:15:12.730 "pending_rdma_read": 0, 00:15:12.730 "pending_rdma_write": 0, 00:15:12.730 "pending_rdma_send": 0, 00:15:12.730 "total_send_wrs": 0, 00:15:12.730 "send_doorbell_updates": 0, 00:15:12.730 "total_recv_wrs": 4096, 00:15:12.730 "recv_doorbell_updates": 1 00:15:12.730 }, 00:15:12.730 { 00:15:12.730 "name": "mlx5_1", 00:15:12.730 "polls": 9874, 00:15:12.730 "idle_polls": 9874, 00:15:12.730 "completions": 0, 00:15:12.730 "requests": 0, 00:15:12.730 "request_latency": 0, 00:15:12.730 "pending_free_request": 0, 00:15:12.730 "pending_rdma_read": 0, 00:15:12.730 "pending_rdma_write": 0, 00:15:12.730 "pending_rdma_send": 0, 00:15:12.730 "total_send_wrs": 0, 00:15:12.730 "send_doorbell_updates": 0, 00:15:12.730 "total_recv_wrs": 4096, 00:15:12.730 "recv_doorbell_updates": 1 00:15:12.730 } 00:15:12.730 ] 00:15:12.730 } 00:15:12.730 ] 00:15:12.730 }, 00:15:12.730 { 00:15:12.730 "name": "nvmf_tgt_poll_group_2", 00:15:12.730 "admin_qpairs": 0, 00:15:12.730 "io_qpairs": 0, 00:15:12.730 "current_admin_qpairs": 0, 00:15:12.730 "current_io_qpairs": 0, 00:15:12.730 "pending_bdev_io": 0, 00:15:12.730 "completed_nvme_io": 0, 00:15:12.730 "transports": [ 00:15:12.730 { 00:15:12.730 "trtype": "RDMA", 00:15:12.730 "pending_data_buffer": 0, 00:15:12.730 "devices": [ 00:15:12.730 { 00:15:12.730 "name": "mlx5_0", 00:15:12.730 "polls": 5514, 00:15:12.730 "idle_polls": 5514, 00:15:12.730 "completions": 0, 00:15:12.730 "requests": 0, 00:15:12.730 "request_latency": 0, 00:15:12.730 "pending_free_request": 0, 00:15:12.730 "pending_rdma_read": 0, 00:15:12.730 "pending_rdma_write": 0, 00:15:12.730 "pending_rdma_send": 0, 00:15:12.730 "total_send_wrs": 0, 00:15:12.730 "send_doorbell_updates": 0, 00:15:12.730 "total_recv_wrs": 4096, 00:15:12.730 "recv_doorbell_updates": 1 00:15:12.730 }, 00:15:12.730 { 00:15:12.730 "name": "mlx5_1", 00:15:12.730 "polls": 5514, 00:15:12.730 "idle_polls": 5514, 00:15:12.730 "completions": 0, 00:15:12.730 "requests": 0, 00:15:12.730 "request_latency": 0, 00:15:12.730 "pending_free_request": 0, 00:15:12.730 "pending_rdma_read": 0, 00:15:12.730 "pending_rdma_write": 0, 00:15:12.730 "pending_rdma_send": 0, 00:15:12.730 "total_send_wrs": 0, 00:15:12.730 "send_doorbell_updates": 0, 00:15:12.730 "total_recv_wrs": 4096, 00:15:12.730 "recv_doorbell_updates": 1 00:15:12.730 } 00:15:12.730 ] 00:15:12.730 } 00:15:12.730 ] 00:15:12.730 }, 00:15:12.730 { 00:15:12.730 "name": "nvmf_tgt_poll_group_3", 00:15:12.730 "admin_qpairs": 0, 00:15:12.730 "io_qpairs": 0, 00:15:12.730 "current_admin_qpairs": 0, 00:15:12.730 "current_io_qpairs": 0, 00:15:12.730 "pending_bdev_io": 0, 00:15:12.730 "completed_nvme_io": 0, 00:15:12.730 "transports": [ 00:15:12.730 { 00:15:12.730 "trtype": "RDMA", 00:15:12.730 "pending_data_buffer": 0, 00:15:12.730 "devices": [ 00:15:12.730 { 00:15:12.730 "name": "mlx5_0", 00:15:12.730 "polls": 884, 00:15:12.730 "idle_polls": 884, 00:15:12.730 "completions": 0, 00:15:12.730 "requests": 0, 00:15:12.730 "request_latency": 0, 00:15:12.730 "pending_free_request": 0, 00:15:12.730 "pending_rdma_read": 0, 00:15:12.730 "pending_rdma_write": 0, 00:15:12.730 "pending_rdma_send": 0, 00:15:12.730 "total_send_wrs": 0, 00:15:12.730 "send_doorbell_updates": 0, 00:15:12.730 "total_recv_wrs": 4096, 00:15:12.730 "recv_doorbell_updates": 1 00:15:12.730 }, 00:15:12.730 { 00:15:12.730 "name": "mlx5_1", 00:15:12.730 "polls": 884, 
00:15:12.730 "idle_polls": 884, 00:15:12.730 "completions": 0, 00:15:12.730 "requests": 0, 00:15:12.730 "request_latency": 0, 00:15:12.730 "pending_free_request": 0, 00:15:12.730 "pending_rdma_read": 0, 00:15:12.730 "pending_rdma_write": 0, 00:15:12.730 "pending_rdma_send": 0, 00:15:12.730 "total_send_wrs": 0, 00:15:12.730 "send_doorbell_updates": 0, 00:15:12.730 "total_recv_wrs": 4096, 00:15:12.730 "recv_doorbell_updates": 1 00:15:12.730 } 00:15:12.730 ] 00:15:12.730 } 00:15:12.730 ] 00:15:12.730 } 00:15:12.730 ] 00:15:12.730 }' 00:15:12.730 21:18:47 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:12.730 21:18:47 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:12.730 21:18:47 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:12.730 21:18:47 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:12.730 21:18:47 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:12.730 21:18:47 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:12.730 21:18:47 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:12.730 21:18:47 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:12.730 21:18:47 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:12.730 21:18:47 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:12.730 21:18:47 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:15:12.730 21:18:47 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:15:12.730 21:18:47 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:15:12.730 21:18:47 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:15:12.730 21:18:47 -- target/rpc.sh@15 -- # wc -l 00:15:12.730 21:18:47 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:15:12.730 21:18:47 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:15:12.990 21:18:47 -- target/rpc.sh@41 -- # transport_type=RDMA 00:15:12.990 21:18:47 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:15:12.990 21:18:47 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:15:12.990 21:18:47 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:15:12.990 21:18:47 -- target/rpc.sh@15 -- # wc -l 00:15:12.990 21:18:47 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:15:12.990 21:18:47 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:15:12.990 21:18:47 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:12.990 21:18:47 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:12.990 21:18:47 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:12.990 21:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.990 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.990 Malloc1 00:15:12.990 21:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.990 21:18:47 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:12.990 21:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.990 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.990 21:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.990 21:18:47 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.990 21:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.990 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.990 21:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.990 
21:18:47 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:12.990 21:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.990 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.990 21:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.990 21:18:47 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:12.990 21:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.990 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.990 [2024-07-26 21:18:47.718093] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:12.990 21:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.990 21:18:47 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:15:12.990 21:18:47 -- common/autotest_common.sh@640 -- # local es=0 00:15:12.990 21:18:47 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:15:12.990 21:18:47 -- common/autotest_common.sh@628 -- # local arg=nvme 00:15:12.990 21:18:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:12.990 21:18:47 -- common/autotest_common.sh@632 -- # type -t nvme 00:15:12.990 21:18:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:12.990 21:18:47 -- common/autotest_common.sh@634 -- # type -P nvme 00:15:12.990 21:18:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:12.990 21:18:47 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:15:12.990 21:18:47 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:15:12.990 21:18:47 -- common/autotest_common.sh@643 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:15:12.990 [2024-07-26 21:18:47.770236] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:15:12.990 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:12.990 could not add new controller: failed to write to nvme-fabrics device 00:15:12.990 21:18:47 -- common/autotest_common.sh@643 -- # es=1 00:15:12.990 21:18:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:12.990 21:18:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:12.990 21:18:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:12.990 21:18:47 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:12.990 21:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.990 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:15:12.990 
21:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.990 21:18:47 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:13.928 21:18:48 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.928 21:18:48 -- common/autotest_common.sh@1177 -- # local i=0 00:15:13.928 21:18:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.928 21:18:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:13.928 21:18:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:16.461 21:18:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:16.461 21:18:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:16.461 21:18:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.461 21:18:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:16.461 21:18:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.461 21:18:50 -- common/autotest_common.sh@1187 -- # return 0 00:15:16.461 21:18:50 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.030 21:18:51 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.030 21:18:51 -- common/autotest_common.sh@1198 -- # local i=0 00:15:17.030 21:18:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:17.030 21:18:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.030 21:18:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:17.030 21:18:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.030 21:18:51 -- common/autotest_common.sh@1210 -- # return 0 00:15:17.030 21:18:51 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:17.030 21:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.030 21:18:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.030 21:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.030 21:18:51 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:17.030 21:18:51 -- common/autotest_common.sh@640 -- # local es=0 00:15:17.030 21:18:51 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:17.030 21:18:51 -- common/autotest_common.sh@628 -- # local arg=nvme 00:15:17.030 21:18:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:17.030 21:18:51 -- common/autotest_common.sh@632 -- # type -t nvme 00:15:17.030 21:18:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:17.030 21:18:51 -- common/autotest_common.sh@634 -- # type -P nvme 00:15:17.030 21:18:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:17.030 21:18:51 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:15:17.030 
21:18:51 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:15:17.030 21:18:51 -- common/autotest_common.sh@643 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:17.030 [2024-07-26 21:18:51.871972] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:15:17.289 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:17.289 could not add new controller: failed to write to nvme-fabrics device 00:15:17.289 21:18:51 -- common/autotest_common.sh@643 -- # es=1 00:15:17.289 21:18:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:17.289 21:18:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:17.289 21:18:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:17.289 21:18:51 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:17.289 21:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.289 21:18:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.289 21:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.289 21:18:51 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:18.224 21:18:52 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:18.224 21:18:52 -- common/autotest_common.sh@1177 -- # local i=0 00:15:18.224 21:18:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.224 21:18:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:18.224 21:18:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:20.126 21:18:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:20.126 21:18:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:20.126 21:18:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.126 21:18:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:20.126 21:18:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.126 21:18:54 -- common/autotest_common.sh@1187 -- # return 0 00:15:20.126 21:18:54 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.061 21:18:55 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:21.061 21:18:55 -- common/autotest_common.sh@1198 -- # local i=0 00:15:21.061 21:18:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:21.061 21:18:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.061 21:18:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:21.061 21:18:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.061 21:18:55 -- common/autotest_common.sh@1210 -- # return 0 00:15:21.061 21:18:55 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.061 21:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.061 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:15:21.061 21:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
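Note: the connect attempts above exercise per-host access control: with allow_any_host disabled the fabric connect is rejected by nvmf_qpair_access_allowed, and it succeeds only after the host NQN is added to the subsystem (or allow_any_host is re-enabled). A minimal sketch of that flow, using the subsystem, address, and host NQN from this run (the scripts/rpc.py path is an assumption):
    SUBSYS=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    ./scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBSYS"                        # deny hosts not explicitly listed
    nvme connect -t rdma -n "$SUBSYS" -a 192.168.100.8 -s 4420 --hostnqn="$HOSTNQN"    # expected to fail with an I/O error
    ./scripts/rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN"                      # whitelist this host NQN
    nvme connect -t rdma -n "$SUBSYS" -a 192.168.100.8 -s 4420 --hostnqn="$HOSTNQN"    # now succeeds
    nvme disconnect -n "$SUBSYS"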
00:15:21.061 21:18:55 -- target/rpc.sh@81 -- # seq 1 5 00:15:21.061 21:18:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:21.061 21:18:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.061 21:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.061 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:15:21.061 21:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.061 21:18:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:21.061 21:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.061 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:15:21.061 [2024-07-26 21:18:55.919723] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:21.061 21:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.061 21:18:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:21.061 21:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.061 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:15:21.320 21:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.320 21:18:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.320 21:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.320 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:15:21.320 21:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.320 21:18:55 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:22.257 21:18:56 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:22.257 21:18:56 -- common/autotest_common.sh@1177 -- # local i=0 00:15:22.257 21:18:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.257 21:18:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:22.257 21:18:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:24.160 21:18:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:24.160 21:18:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:24.160 21:18:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.160 21:18:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:24.160 21:18:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.160 21:18:58 -- common/autotest_common.sh@1187 -- # return 0 00:15:24.160 21:18:58 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.094 21:18:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.094 21:18:59 -- common/autotest_common.sh@1198 -- # local i=0 00:15:25.094 21:18:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:25.094 21:18:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.094 21:18:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:25.094 21:18:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.094 21:18:59 -- common/autotest_common.sh@1210 -- # return 0 00:15:25.094 
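Note: each pass of the seq 1 5 loop above recreates the subsystem, attaches Malloc1 as namespace 5, connects over RDMA, waits for the block device to surface, then tears everything down (the remove_ns/delete_subsystem steps follow just below). One iteration, sketched with the values from this run (rpc.py path assumed; the hostnqn/hostid flags used in the log are omitted for brevity):
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done    # wait for the namespace to appear
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1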
21:18:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:25.094 21:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.094 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:15:25.094 21:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.094 21:18:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.094 21:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.094 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:15:25.094 21:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.094 21:18:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:25.094 21:18:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:25.094 21:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.094 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:15:25.094 21:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.094 21:18:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:25.094 21:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.094 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:15:25.094 [2024-07-26 21:18:59.940642] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:25.094 21:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.094 21:18:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:25.094 21:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.094 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:15:25.094 21:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.094 21:18:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:25.094 21:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.094 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:15:25.094 21:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.094 21:18:59 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:26.495 21:19:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:26.495 21:19:00 -- common/autotest_common.sh@1177 -- # local i=0 00:15:26.495 21:19:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.495 21:19:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:26.495 21:19:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:28.397 21:19:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:28.397 21:19:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:28.397 21:19:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:28.397 21:19:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:28.397 21:19:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.397 21:19:02 -- common/autotest_common.sh@1187 -- # return 0 00:15:28.397 21:19:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.330 21:19:03 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.330 21:19:03 -- common/autotest_common.sh@1198 -- # local i=0 00:15:29.330 21:19:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:29.330 21:19:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.330 21:19:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.330 21:19:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:29.330 21:19:03 -- common/autotest_common.sh@1210 -- # return 0 00:15:29.330 21:19:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.330 21:19:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.330 21:19:03 -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 21:19:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.330 21:19:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.330 21:19:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.330 21:19:03 -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 21:19:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.330 21:19:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:29.330 21:19:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.330 21:19:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.330 21:19:03 -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 21:19:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.330 21:19:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:29.330 21:19:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.330 21:19:03 -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 [2024-07-26 21:19:03.962174] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:29.330 21:19:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.330 21:19:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:29.330 21:19:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.330 21:19:03 -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 21:19:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.330 21:19:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.330 21:19:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.330 21:19:03 -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 21:19:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.330 21:19:03 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:30.265 21:19:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:30.265 21:19:04 -- common/autotest_common.sh@1177 -- # local i=0 00:15:30.265 21:19:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:30.265 21:19:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:30.265 21:19:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:32.167 21:19:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:32.167 21:19:06 -- 
common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:32.167 21:19:06 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.167 21:19:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:32.167 21:19:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.167 21:19:06 -- common/autotest_common.sh@1187 -- # return 0 00:15:32.167 21:19:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.103 21:19:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:33.103 21:19:07 -- common/autotest_common.sh@1198 -- # local i=0 00:15:33.103 21:19:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:33.103 21:19:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.103 21:19:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:33.103 21:19:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.103 21:19:07 -- common/autotest_common.sh@1210 -- # return 0 00:15:33.103 21:19:07 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:33.103 21:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.103 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:15:33.362 21:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.362 21:19:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.362 21:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.362 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:15:33.362 21:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.362 21:19:07 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:33.362 21:19:07 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:33.362 21:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.362 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:15:33.362 21:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.362 21:19:07 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:33.362 21:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.362 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:15:33.362 [2024-07-26 21:19:07.995289] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:33.362 21:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.362 21:19:07 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:33.362 21:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.362 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:15:33.362 21:19:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.362 21:19:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:33.362 21:19:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.362 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:15:33.362 21:19:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.362 21:19:08 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:34.299 21:19:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:34.299 21:19:08 -- common/autotest_common.sh@1177 -- # local i=0 00:15:34.299 21:19:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.299 21:19:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:34.299 21:19:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:36.204 21:19:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:36.204 21:19:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:36.204 21:19:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:36.204 21:19:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:36.204 21:19:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:36.204 21:19:11 -- common/autotest_common.sh@1187 -- # return 0 00:15:36.204 21:19:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.141 21:19:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:37.141 21:19:11 -- common/autotest_common.sh@1198 -- # local i=0 00:15:37.141 21:19:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:37.141 21:19:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.141 21:19:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:37.141 21:19:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.141 21:19:11 -- common/autotest_common.sh@1210 -- # return 0 00:15:37.141 21:19:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:37.141 21:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.141 21:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:37.141 21:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.141 21:19:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.141 21:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.141 21:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:37.141 21:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.141 21:19:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:37.141 21:19:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:37.141 21:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.401 21:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:37.401 21:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.401 21:19:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:37.401 21:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.401 21:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:37.401 [2024-07-26 21:19:12.022582] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:37.401 21:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.401 21:19:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:37.401 21:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.401 21:19:12 -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.401 21:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.401 21:19:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:37.401 21:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.401 21:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:37.401 21:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.401 21:19:12 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:38.338 21:19:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.338 21:19:13 -- common/autotest_common.sh@1177 -- # local i=0 00:15:38.338 21:19:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.338 21:19:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:38.338 21:19:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:40.246 21:19:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:40.246 21:19:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:40.246 21:19:15 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.246 21:19:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:40.246 21:19:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.246 21:19:15 -- common/autotest_common.sh@1187 -- # return 0 00:15:40.246 21:19:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.183 21:19:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:41.183 21:19:15 -- common/autotest_common.sh@1198 -- # local i=0 00:15:41.184 21:19:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:41.184 21:19:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.184 21:19:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:41.184 21:19:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.184 21:19:16 -- common/autotest_common.sh@1210 -- # return 0 00:15:41.184 21:19:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:41.184 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.184 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.184 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.184 21:19:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.184 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.184 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.184 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.184 21:19:16 -- target/rpc.sh@99 -- # seq 1 5 00:15:41.184 21:19:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:41.184 21:19:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:41.184 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.184 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.184 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.184 21:19:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:41.184 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.184 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 [2024-07-26 21:19:16.060665] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:41.444 21:19:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 [2024-07-26 21:19:16.108829] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 
21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:41.444 21:19:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 [2024-07-26 21:19:16.156984] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:41.444 21:19:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 [2024-07-26 21:19:16.209175] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:41.444 21:19:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 [2024-07-26 21:19:16.257333] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.444 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.444 21:19:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.444 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.445 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.445 21:19:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.445 21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.445 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.445 21:19:16 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:41.445 
21:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.445 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 21:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.704 21:19:16 -- target/rpc.sh@110 -- # stats='{ 00:15:41.704 "tick_rate": 2500000000, 00:15:41.704 "poll_groups": [ 00:15:41.704 { 00:15:41.704 "name": "nvmf_tgt_poll_group_0", 00:15:41.704 "admin_qpairs": 2, 00:15:41.704 "io_qpairs": 27, 00:15:41.704 "current_admin_qpairs": 0, 00:15:41.704 "current_io_qpairs": 0, 00:15:41.704 "pending_bdev_io": 0, 00:15:41.704 "completed_nvme_io": 78, 00:15:41.704 "transports": [ 00:15:41.704 { 00:15:41.704 "trtype": "RDMA", 00:15:41.704 "pending_data_buffer": 0, 00:15:41.704 "devices": [ 00:15:41.704 { 00:15:41.704 "name": "mlx5_0", 00:15:41.704 "polls": 3420430, 00:15:41.704 "idle_polls": 3420183, 00:15:41.704 "completions": 269, 00:15:41.704 "requests": 134, 00:15:41.704 "request_latency": 21869900, 00:15:41.704 "pending_free_request": 0, 00:15:41.704 "pending_rdma_read": 0, 00:15:41.704 "pending_rdma_write": 0, 00:15:41.704 "pending_rdma_send": 0, 00:15:41.704 "total_send_wrs": 212, 00:15:41.704 "send_doorbell_updates": 123, 00:15:41.704 "total_recv_wrs": 4230, 00:15:41.704 "recv_doorbell_updates": 123 00:15:41.704 }, 00:15:41.704 { 00:15:41.704 "name": "mlx5_1", 00:15:41.704 "polls": 3420430, 00:15:41.704 "idle_polls": 3420430, 00:15:41.704 "completions": 0, 00:15:41.704 "requests": 0, 00:15:41.704 "request_latency": 0, 00:15:41.704 "pending_free_request": 0, 00:15:41.704 "pending_rdma_read": 0, 00:15:41.704 "pending_rdma_write": 0, 00:15:41.704 "pending_rdma_send": 0, 00:15:41.704 "total_send_wrs": 0, 00:15:41.704 "send_doorbell_updates": 0, 00:15:41.704 "total_recv_wrs": 4096, 00:15:41.704 "recv_doorbell_updates": 1 00:15:41.704 } 00:15:41.704 ] 00:15:41.704 } 00:15:41.704 ] 00:15:41.704 }, 00:15:41.704 { 00:15:41.704 "name": "nvmf_tgt_poll_group_1", 00:15:41.704 "admin_qpairs": 2, 00:15:41.704 "io_qpairs": 26, 00:15:41.704 "current_admin_qpairs": 0, 00:15:41.704 "current_io_qpairs": 0, 00:15:41.704 "pending_bdev_io": 0, 00:15:41.704 "completed_nvme_io": 126, 00:15:41.704 "transports": [ 00:15:41.704 { 00:15:41.704 "trtype": "RDMA", 00:15:41.704 "pending_data_buffer": 0, 00:15:41.704 "devices": [ 00:15:41.704 { 00:15:41.704 "name": "mlx5_0", 00:15:41.704 "polls": 3380228, 00:15:41.704 "idle_polls": 3379908, 00:15:41.704 "completions": 360, 00:15:41.704 "requests": 180, 00:15:41.704 "request_latency": 34257628, 00:15:41.704 "pending_free_request": 0, 00:15:41.704 "pending_rdma_read": 0, 00:15:41.704 "pending_rdma_write": 0, 00:15:41.704 "pending_rdma_send": 0, 00:15:41.704 "total_send_wrs": 305, 00:15:41.704 "send_doorbell_updates": 158, 00:15:41.704 "total_recv_wrs": 4276, 00:15:41.704 "recv_doorbell_updates": 159 00:15:41.704 }, 00:15:41.704 { 00:15:41.704 "name": "mlx5_1", 00:15:41.704 "polls": 3380228, 00:15:41.704 "idle_polls": 3380228, 00:15:41.704 "completions": 0, 00:15:41.704 "requests": 0, 00:15:41.704 "request_latency": 0, 00:15:41.704 "pending_free_request": 0, 00:15:41.704 "pending_rdma_read": 0, 00:15:41.704 "pending_rdma_write": 0, 00:15:41.704 "pending_rdma_send": 0, 00:15:41.704 "total_send_wrs": 0, 00:15:41.704 "send_doorbell_updates": 0, 00:15:41.704 "total_recv_wrs": 4096, 00:15:41.704 "recv_doorbell_updates": 1 00:15:41.704 } 00:15:41.704 ] 00:15:41.704 } 00:15:41.704 ] 00:15:41.704 }, 00:15:41.704 { 00:15:41.704 "name": "nvmf_tgt_poll_group_2", 00:15:41.704 "admin_qpairs": 1, 00:15:41.704 "io_qpairs": 26, 00:15:41.704 
"current_admin_qpairs": 0, 00:15:41.704 "current_io_qpairs": 0, 00:15:41.704 "pending_bdev_io": 0, 00:15:41.704 "completed_nvme_io": 174, 00:15:41.704 "transports": [ 00:15:41.704 { 00:15:41.704 "trtype": "RDMA", 00:15:41.704 "pending_data_buffer": 0, 00:15:41.704 "devices": [ 00:15:41.704 { 00:15:41.704 "name": "mlx5_0", 00:15:41.704 "polls": 3413463, 00:15:41.704 "idle_polls": 3413119, 00:15:41.704 "completions": 405, 00:15:41.704 "requests": 202, 00:15:41.704 "request_latency": 45747336, 00:15:41.704 "pending_free_request": 0, 00:15:41.704 "pending_rdma_read": 0, 00:15:41.704 "pending_rdma_write": 0, 00:15:41.704 "pending_rdma_send": 0, 00:15:41.704 "total_send_wrs": 364, 00:15:41.704 "send_doorbell_updates": 170, 00:15:41.704 "total_recv_wrs": 4298, 00:15:41.704 "recv_doorbell_updates": 170 00:15:41.704 }, 00:15:41.704 { 00:15:41.704 "name": "mlx5_1", 00:15:41.704 "polls": 3413463, 00:15:41.704 "idle_polls": 3413463, 00:15:41.704 "completions": 0, 00:15:41.704 "requests": 0, 00:15:41.704 "request_latency": 0, 00:15:41.704 "pending_free_request": 0, 00:15:41.705 "pending_rdma_read": 0, 00:15:41.705 "pending_rdma_write": 0, 00:15:41.705 "pending_rdma_send": 0, 00:15:41.705 "total_send_wrs": 0, 00:15:41.705 "send_doorbell_updates": 0, 00:15:41.705 "total_recv_wrs": 4096, 00:15:41.705 "recv_doorbell_updates": 1 00:15:41.705 } 00:15:41.705 ] 00:15:41.705 } 00:15:41.705 ] 00:15:41.705 }, 00:15:41.705 { 00:15:41.705 "name": "nvmf_tgt_poll_group_3", 00:15:41.705 "admin_qpairs": 2, 00:15:41.705 "io_qpairs": 26, 00:15:41.705 "current_admin_qpairs": 0, 00:15:41.705 "current_io_qpairs": 0, 00:15:41.705 "pending_bdev_io": 0, 00:15:41.705 "completed_nvme_io": 77, 00:15:41.705 "transports": [ 00:15:41.705 { 00:15:41.705 "trtype": "RDMA", 00:15:41.705 "pending_data_buffer": 0, 00:15:41.705 "devices": [ 00:15:41.705 { 00:15:41.705 "name": "mlx5_0", 00:15:41.705 "polls": 2663653, 00:15:41.705 "idle_polls": 2663416, 00:15:41.705 "completions": 258, 00:15:41.705 "requests": 129, 00:15:41.705 "request_latency": 21642628, 00:15:41.705 "pending_free_request": 0, 00:15:41.705 "pending_rdma_read": 0, 00:15:41.705 "pending_rdma_write": 0, 00:15:41.705 "pending_rdma_send": 0, 00:15:41.705 "total_send_wrs": 204, 00:15:41.705 "send_doorbell_updates": 118, 00:15:41.705 "total_recv_wrs": 4225, 00:15:41.705 "recv_doorbell_updates": 119 00:15:41.705 }, 00:15:41.705 { 00:15:41.705 "name": "mlx5_1", 00:15:41.705 "polls": 2663653, 00:15:41.705 "idle_polls": 2663653, 00:15:41.705 "completions": 0, 00:15:41.705 "requests": 0, 00:15:41.705 "request_latency": 0, 00:15:41.705 "pending_free_request": 0, 00:15:41.705 "pending_rdma_read": 0, 00:15:41.705 "pending_rdma_write": 0, 00:15:41.705 "pending_rdma_send": 0, 00:15:41.705 "total_send_wrs": 0, 00:15:41.705 "send_doorbell_updates": 0, 00:15:41.705 "total_recv_wrs": 4096, 00:15:41.705 "recv_doorbell_updates": 1 00:15:41.705 } 00:15:41.705 ] 00:15:41.705 } 00:15:41.705 ] 00:15:41.705 } 00:15:41.705 ] 00:15:41.705 }' 00:15:41.705 21:19:16 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:41.705 21:19:16 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:41.705 21:19:16 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:41.705 21:19:16 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:41.705 21:19:16 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:41.705 21:19:16 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:41.705 21:19:16 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:41.705 
21:19:16 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:41.705 21:19:16 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:41.705 21:19:16 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:41.705 21:19:16 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:41.705 21:19:16 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:41.705 21:19:16 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:41.705 21:19:16 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:41.705 21:19:16 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:41.705 21:19:16 -- target/rpc.sh@117 -- # (( 1292 > 0 )) 00:15:41.705 21:19:16 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:41.705 21:19:16 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:41.705 21:19:16 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:41.705 21:19:16 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:41.705 21:19:16 -- target/rpc.sh@118 -- # (( 123517492 > 0 )) 00:15:41.705 21:19:16 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:41.705 21:19:16 -- target/rpc.sh@123 -- # nvmftestfini 00:15:41.705 21:19:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:41.705 21:19:16 -- nvmf/common.sh@116 -- # sync 00:15:41.705 21:19:16 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:41.705 21:19:16 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:41.705 21:19:16 -- nvmf/common.sh@119 -- # set +e 00:15:41.705 21:19:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:41.705 21:19:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:41.705 rmmod nvme_rdma 00:15:41.705 rmmod nvme_fabrics 00:15:41.705 21:19:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:41.705 21:19:16 -- nvmf/common.sh@123 -- # set -e 00:15:41.705 21:19:16 -- nvmf/common.sh@124 -- # return 0 00:15:41.705 21:19:16 -- nvmf/common.sh@477 -- # '[' -n 1620210 ']' 00:15:41.705 21:19:16 -- nvmf/common.sh@478 -- # killprocess 1620210 00:15:41.705 21:19:16 -- common/autotest_common.sh@926 -- # '[' -z 1620210 ']' 00:15:41.705 21:19:16 -- common/autotest_common.sh@930 -- # kill -0 1620210 00:15:41.705 21:19:16 -- common/autotest_common.sh@931 -- # uname 00:15:41.965 21:19:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:41.965 21:19:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1620210 00:15:41.965 21:19:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:41.965 21:19:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:41.965 21:19:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1620210' 00:15:41.965 killing process with pid 1620210 00:15:41.965 21:19:16 -- common/autotest_common.sh@945 -- # kill 1620210 00:15:41.965 21:19:16 -- common/autotest_common.sh@950 -- # wait 1620210 00:15:42.225 21:19:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:42.225 21:19:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:42.225 00:15:42.225 real 0m38.956s 00:15:42.225 user 2m4.020s 00:15:42.225 sys 0m7.917s 00:15:42.225 21:19:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.225 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:42.225 ************************************ 00:15:42.225 END TEST nvmf_rpc 00:15:42.225 ************************************ 00:15:42.225 21:19:16 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:42.225 21:19:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:42.225 21:19:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:42.225 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:42.225 ************************************ 00:15:42.225 START TEST nvmf_invalid 00:15:42.225 ************************************ 00:15:42.225 21:19:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:42.225 * Looking for test storage... 00:15:42.225 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:42.225 21:19:17 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.225 21:19:17 -- nvmf/common.sh@7 -- # uname -s 00:15:42.225 21:19:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.225 21:19:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.225 21:19:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.225 21:19:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.225 21:19:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.225 21:19:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.225 21:19:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.225 21:19:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.225 21:19:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.225 21:19:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.225 21:19:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:42.225 21:19:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:42.225 21:19:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.225 21:19:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.225 21:19:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.225 21:19:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:42.225 21:19:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.225 21:19:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.225 21:19:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.225 21:19:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.225 21:19:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.225 21:19:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.225 21:19:17 -- paths/export.sh@5 -- # export PATH 00:15:42.225 21:19:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.225 21:19:17 -- nvmf/common.sh@46 -- # : 0 00:15:42.225 21:19:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:42.225 21:19:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:42.225 21:19:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:42.225 21:19:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.225 21:19:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.225 21:19:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:42.225 21:19:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:42.225 21:19:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:42.225 21:19:17 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:42.225 21:19:17 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:42.225 21:19:17 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:42.225 21:19:17 -- target/invalid.sh@14 -- # target=foobar 00:15:42.225 21:19:17 -- target/invalid.sh@16 -- # RANDOM=0 00:15:42.225 21:19:17 -- target/invalid.sh@34 -- # nvmftestinit 00:15:42.225 21:19:17 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:42.225 21:19:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.225 21:19:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:42.225 21:19:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:42.225 21:19:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:42.225 21:19:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.225 21:19:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.225 21:19:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.225 21:19:17 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:42.225 21:19:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:42.225 21:19:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:42.225 21:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:52.228 21:19:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:52.228 21:19:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:52.228 21:19:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:52.228 21:19:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:52.228 21:19:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:52.228 21:19:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:52.228 21:19:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:52.228 21:19:25 -- nvmf/common.sh@294 -- # net_devs=() 00:15:52.228 21:19:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:52.228 21:19:25 -- nvmf/common.sh@295 -- # e810=() 00:15:52.228 21:19:25 -- nvmf/common.sh@295 -- # local -ga e810 00:15:52.228 21:19:25 -- nvmf/common.sh@296 -- # x722=() 00:15:52.228 21:19:25 -- nvmf/common.sh@296 -- # local -ga x722 00:15:52.228 21:19:25 -- nvmf/common.sh@297 -- # mlx=() 00:15:52.228 21:19:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:52.228 21:19:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.228 21:19:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:52.228 21:19:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:52.228 21:19:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:52.228 21:19:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:52.228 21:19:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:52.228 21:19:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:52.228 21:19:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:52.228 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:52.228 21:19:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:52.228 21:19:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:52.228 21:19:25 -- nvmf/common.sh@340 -- # echo 'Found 
0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:52.228 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:52.228 21:19:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:52.228 21:19:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:52.228 21:19:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:52.228 21:19:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.228 21:19:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:52.228 21:19:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.228 21:19:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:52.228 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:52.228 21:19:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.228 21:19:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:52.228 21:19:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.228 21:19:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:52.228 21:19:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.228 21:19:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:52.228 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:52.228 21:19:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.228 21:19:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:52.228 21:19:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:52.228 21:19:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:52.228 21:19:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:52.228 21:19:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:52.228 21:19:25 -- nvmf/common.sh@57 -- # uname 00:15:52.228 21:19:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:52.228 21:19:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:52.228 21:19:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:52.228 21:19:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:52.228 21:19:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:52.228 21:19:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:52.228 21:19:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:52.229 21:19:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:52.229 21:19:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:52.229 21:19:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:52.229 21:19:25 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:52.229 21:19:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:52.229 21:19:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:52.229 21:19:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:52.229 21:19:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:52.229 21:19:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:52.229 21:19:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
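The allocate_nic_ips trace that follows pulls each RDMA interface's IPv4 address out of `ip -o -4 addr show`. A minimal standalone sketch of that extraction is given below; the interface name is taken from this particular test bed and is an assumption for any other machine.

#!/usr/bin/env bash
# Sketch of the address lookup performed by get_ip_address in the trace below.
# mlx_0_0 is the interface name on this test bed; substitute your own RDMA netdev.
iface=mlx_0_0

# Field 4 of `ip -o -4 addr show` is "ADDRESS/PREFIX" for each configured address.
ip_with_prefix=$(ip -o -4 addr show "$iface" | awk '{print $4}')

# Drop the prefix length, leaving the bare address (192.168.100.8 on this host).
ip_addr=$(echo "$ip_with_prefix" | cut -d/ -f1)

echo "$iface -> $ip_addr"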
00:15:52.229 21:19:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.229 21:19:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:52.229 21:19:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:52.229 21:19:25 -- nvmf/common.sh@104 -- # continue 2 00:15:52.229 21:19:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:52.229 21:19:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.229 21:19:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:52.229 21:19:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.229 21:19:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:52.229 21:19:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:52.229 21:19:25 -- nvmf/common.sh@104 -- # continue 2 00:15:52.229 21:19:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:52.229 21:19:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:52.229 21:19:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:52.229 21:19:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:52.229 21:19:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:52.229 21:19:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:52.229 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:52.229 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:52.229 altname enp217s0f0np0 00:15:52.229 altname ens818f0np0 00:15:52.229 inet 192.168.100.8/24 scope global mlx_0_0 00:15:52.229 valid_lft forever preferred_lft forever 00:15:52.229 21:19:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:52.229 21:19:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:52.229 21:19:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:52.229 21:19:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:52.229 21:19:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:52.229 21:19:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:52.229 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:52.229 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:52.229 altname enp217s0f1np1 00:15:52.229 altname ens818f1np1 00:15:52.229 inet 192.168.100.9/24 scope global mlx_0_1 00:15:52.229 valid_lft forever preferred_lft forever 00:15:52.229 21:19:25 -- nvmf/common.sh@410 -- # return 0 00:15:52.229 21:19:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:52.229 21:19:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:52.229 21:19:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:52.229 21:19:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:52.229 21:19:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:52.229 21:19:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:52.229 21:19:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:52.229 21:19:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:52.229 21:19:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:52.229 21:19:25 -- nvmf/common.sh@95 -- # (( 2 == 0 
)) 00:15:52.229 21:19:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:52.229 21:19:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.229 21:19:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:52.229 21:19:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:52.229 21:19:25 -- nvmf/common.sh@104 -- # continue 2 00:15:52.229 21:19:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:52.229 21:19:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.229 21:19:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:52.229 21:19:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.229 21:19:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:52.229 21:19:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:52.229 21:19:25 -- nvmf/common.sh@104 -- # continue 2 00:15:52.229 21:19:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:52.229 21:19:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:52.229 21:19:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:52.229 21:19:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:52.229 21:19:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:52.229 21:19:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:52.229 21:19:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:52.229 21:19:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:52.229 192.168.100.9' 00:15:52.229 21:19:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:52.229 192.168.100.9' 00:15:52.229 21:19:25 -- nvmf/common.sh@445 -- # head -n 1 00:15:52.229 21:19:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:52.229 21:19:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:52.229 192.168.100.9' 00:15:52.229 21:19:25 -- nvmf/common.sh@446 -- # tail -n +2 00:15:52.229 21:19:25 -- nvmf/common.sh@446 -- # head -n 1 00:15:52.229 21:19:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:52.229 21:19:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:52.229 21:19:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:52.229 21:19:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:52.229 21:19:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:52.229 21:19:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:52.229 21:19:25 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:52.229 21:19:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:52.229 21:19:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:52.229 21:19:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 21:19:25 -- nvmf/common.sh@469 -- # nvmfpid=1629722 00:15:52.229 21:19:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.229 21:19:25 -- nvmf/common.sh@470 -- # waitforlisten 1629722 00:15:52.229 21:19:25 -- common/autotest_common.sh@819 -- # '[' -z 1629722 ']' 00:15:52.229 21:19:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.229 
21:19:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:52.229 21:19:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.229 21:19:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:52.229 21:19:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 [2024-07-26 21:19:25.601527] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:52.229 [2024-07-26 21:19:25.601573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.229 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.229 [2024-07-26 21:19:25.686326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.229 [2024-07-26 21:19:25.724111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:52.229 [2024-07-26 21:19:25.724223] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.229 [2024-07-26 21:19:25.724233] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.229 [2024-07-26 21:19:25.724242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.229 [2024-07-26 21:19:25.724292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.229 [2024-07-26 21:19:25.724315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.229 [2024-07-26 21:19:25.724405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.229 [2024-07-26 21:19:25.724407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.229 21:19:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:52.229 21:19:26 -- common/autotest_common.sh@852 -- # return 0 00:15:52.229 21:19:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:52.229 21:19:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:52.229 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 21:19:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.229 21:19:26 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:52.230 21:19:26 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20227 00:15:52.230 [2024-07-26 21:19:26.599454] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:52.230 21:19:26 -- target/invalid.sh@40 -- # out='request: 00:15:52.230 { 00:15:52.230 "nqn": "nqn.2016-06.io.spdk:cnode20227", 00:15:52.230 "tgt_name": "foobar", 00:15:52.230 "method": "nvmf_create_subsystem", 00:15:52.230 "req_id": 1 00:15:52.230 } 00:15:52.230 Got JSON-RPC error response 00:15:52.230 response: 00:15:52.230 { 00:15:52.230 "code": -32603, 00:15:52.230 "message": "Unable to find target foobar" 00:15:52.230 }' 00:15:52.230 21:19:26 -- target/invalid.sh@41 -- # [[ request: 00:15:52.230 { 00:15:52.230 "nqn": "nqn.2016-06.io.spdk:cnode20227", 00:15:52.230 "tgt_name": "foobar", 00:15:52.230 "method": "nvmf_create_subsystem", 
00:15:52.230 "req_id": 1 00:15:52.230 } 00:15:52.230 Got JSON-RPC error response 00:15:52.230 response: 00:15:52.230 { 00:15:52.230 "code": -32603, 00:15:52.230 "message": "Unable to find target foobar" 00:15:52.230 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:52.230 21:19:26 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:52.230 21:19:26 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18182 00:15:52.230 [2024-07-26 21:19:26.792124] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18182: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:52.230 21:19:26 -- target/invalid.sh@45 -- # out='request: 00:15:52.230 { 00:15:52.230 "nqn": "nqn.2016-06.io.spdk:cnode18182", 00:15:52.230 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:52.230 "method": "nvmf_create_subsystem", 00:15:52.230 "req_id": 1 00:15:52.230 } 00:15:52.230 Got JSON-RPC error response 00:15:52.230 response: 00:15:52.230 { 00:15:52.230 "code": -32602, 00:15:52.230 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:52.230 }' 00:15:52.230 21:19:26 -- target/invalid.sh@46 -- # [[ request: 00:15:52.230 { 00:15:52.230 "nqn": "nqn.2016-06.io.spdk:cnode18182", 00:15:52.230 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:52.230 "method": "nvmf_create_subsystem", 00:15:52.230 "req_id": 1 00:15:52.230 } 00:15:52.230 Got JSON-RPC error response 00:15:52.230 response: 00:15:52.230 { 00:15:52.230 "code": -32602, 00:15:52.230 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:52.230 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:52.230 21:19:26 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:52.230 21:19:26 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29638 00:15:52.230 [2024-07-26 21:19:26.976681] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29638: invalid model number 'SPDK_Controller' 00:15:52.230 21:19:27 -- target/invalid.sh@50 -- # out='request: 00:15:52.230 { 00:15:52.230 "nqn": "nqn.2016-06.io.spdk:cnode29638", 00:15:52.230 "model_number": "SPDK_Controller\u001f", 00:15:52.230 "method": "nvmf_create_subsystem", 00:15:52.230 "req_id": 1 00:15:52.230 } 00:15:52.230 Got JSON-RPC error response 00:15:52.230 response: 00:15:52.230 { 00:15:52.230 "code": -32602, 00:15:52.230 "message": "Invalid MN SPDK_Controller\u001f" 00:15:52.230 }' 00:15:52.230 21:19:27 -- target/invalid.sh@51 -- # [[ request: 00:15:52.230 { 00:15:52.230 "nqn": "nqn.2016-06.io.spdk:cnode29638", 00:15:52.230 "model_number": "SPDK_Controller\u001f", 00:15:52.230 "method": "nvmf_create_subsystem", 00:15:52.230 "req_id": 1 00:15:52.230 } 00:15:52.230 Got JSON-RPC error response 00:15:52.230 response: 00:15:52.230 { 00:15:52.230 "code": -32602, 00:15:52.230 "message": "Invalid MN SPDK_Controller\u001f" 00:15:52.230 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:52.230 21:19:27 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:52.230 21:19:27 -- target/invalid.sh@19 -- # local length=21 ll 00:15:52.230 21:19:27 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' 
'93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:52.230 21:19:27 -- target/invalid.sh@21 -- # local chars 00:15:52.230 21:19:27 -- target/invalid.sh@22 -- # local string 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 57 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # string+=9 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 126 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # string+='~' 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 84 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # string+=T 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 110 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # string+=n 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 88 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # string+=X 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 41 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # string+=')' 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 43 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # string+=+ 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 73 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # string+=I 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 112 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # string+=p 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 86 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # 
string+=V 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.230 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # printf %x 126 00:15:52.230 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # string+='~' 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # printf %x 107 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # string+=k 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # printf %x 112 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # string+=p 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # printf %x 88 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # string+=X 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # printf %x 39 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # string+=\' 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # printf %x 41 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # string+=')' 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # printf %x 66 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:52.490 21:19:27 -- target/invalid.sh@25 -- # string+=B 00:15:52.490 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.491 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # printf %x 125 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # string+='}' 00:15:52.491 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.491 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # printf %x 67 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # string+=C 00:15:52.491 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.491 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # printf %x 115 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # string+=s 00:15:52.491 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.491 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # printf %x 107 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:52.491 21:19:27 -- target/invalid.sh@25 -- # 
string+=k 00:15:52.491 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.491 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.491 21:19:27 -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:15:52.491 21:19:27 -- target/invalid.sh@31 -- # echo '9~TnX)+IpV~kpX'\'')B}Csk' 00:15:52.491 21:19:27 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '9~TnX)+IpV~kpX'\'')B}Csk' nqn.2016-06.io.spdk:cnode416 00:15:52.491 [2024-07-26 21:19:27.329916] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode416: invalid serial number '9~TnX)+IpV~kpX')B}Csk' 00:15:52.491 21:19:27 -- target/invalid.sh@54 -- # out='request: 00:15:52.491 { 00:15:52.491 "nqn": "nqn.2016-06.io.spdk:cnode416", 00:15:52.491 "serial_number": "9~TnX)+IpV~kpX'\'')B}Csk", 00:15:52.491 "method": "nvmf_create_subsystem", 00:15:52.491 "req_id": 1 00:15:52.491 } 00:15:52.491 Got JSON-RPC error response 00:15:52.491 response: 00:15:52.491 { 00:15:52.491 "code": -32602, 00:15:52.491 "message": "Invalid SN 9~TnX)+IpV~kpX'\'')B}Csk" 00:15:52.491 }' 00:15:52.491 21:19:27 -- target/invalid.sh@55 -- # [[ request: 00:15:52.491 { 00:15:52.491 "nqn": "nqn.2016-06.io.spdk:cnode416", 00:15:52.491 "serial_number": "9~TnX)+IpV~kpX')B}Csk", 00:15:52.491 "method": "nvmf_create_subsystem", 00:15:52.491 "req_id": 1 00:15:52.491 } 00:15:52.491 Got JSON-RPC error response 00:15:52.491 response: 00:15:52.491 { 00:15:52.491 "code": -32602, 00:15:52.491 "message": "Invalid SN 9~TnX)+IpV~kpX')B}Csk" 00:15:52.491 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:52.751 21:19:27 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:52.751 21:19:27 -- target/invalid.sh@19 -- # local length=41 ll 00:15:52.751 21:19:27 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:52.751 21:19:27 -- target/invalid.sh@21 -- # local chars 00:15:52.751 21:19:27 -- target/invalid.sh@22 -- # local string 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 76 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=L 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 72 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=H 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 55 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=7 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 
21:19:27 -- target/invalid.sh@25 -- # printf %x 119 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=w 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 104 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=h 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 52 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=4 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 33 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+='!' 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 53 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=5 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 107 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=k 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 95 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=_ 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 37 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=% 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 91 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+='[' 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 122 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=z 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 113 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=q 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 
21:19:27 -- target/invalid.sh@25 -- # printf %x 123 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+='{' 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 103 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=g 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 85 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=U 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 125 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+='}' 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 48 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+=0 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.751 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # printf %x 91 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:52.751 21:19:27 -- target/invalid.sh@25 -- # string+='[' 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 47 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=/ 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 57 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=9 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 90 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=Z 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 70 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=F 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 85 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=U 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 
21:19:27 -- target/invalid.sh@25 -- # printf %x 79 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=O 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 108 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=l 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 125 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+='}' 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 106 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=j 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 101 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=e 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 117 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=u 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 97 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=a 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 80 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=P 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # printf %x 78 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:52.752 21:19:27 -- target/invalid.sh@25 -- # string+=N 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.752 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:53.011 21:19:27 -- target/invalid.sh@25 -- # printf %x 49 00:15:53.011 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:53.011 21:19:27 -- target/invalid.sh@25 -- # string+=1 00:15:53.011 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:53.011 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:53.011 21:19:27 -- target/invalid.sh@25 -- # printf %x 91 00:15:53.011 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:53.011 21:19:27 -- target/invalid.sh@25 -- # string+='[' 00:15:53.011 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:53.011 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:53.011 
21:19:27 -- target/invalid.sh@25 -- # printf %x 45 00:15:53.011 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:53.011 21:19:27 -- target/invalid.sh@25 -- # string+=- 00:15:53.011 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:53.011 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # printf %x 67 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # string+=C 00:15:53.012 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:53.012 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # printf %x 105 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # string+=i 00:15:53.012 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:53.012 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # printf %x 127 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:53.012 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:53.012 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # printf %x 78 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:53.012 21:19:27 -- target/invalid.sh@25 -- # string+=N 00:15:53.012 21:19:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:53.012 21:19:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:53.012 21:19:27 -- target/invalid.sh@28 -- # [[ L == \- ]] 00:15:53.012 21:19:27 -- target/invalid.sh@31 -- # echo 'LH7wh4!5k_%[zq{gU}0[/9ZFUOl}jeuaPN1[-CiN' 00:15:53.012 21:19:27 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'LH7wh4!5k_%[zq{gU}0[/9ZFUOl}jeuaPN1[-CiN' nqn.2016-06.io.spdk:cnode21997 00:15:53.012 [2024-07-26 21:19:27.827654] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21997: invalid model number 'LH7wh4!5k_%[zq{gU}0[/9ZFUOl}jeuaPN1[-CiN' 00:15:53.012 21:19:27 -- target/invalid.sh@58 -- # out='request: 00:15:53.012 { 00:15:53.012 "nqn": "nqn.2016-06.io.spdk:cnode21997", 00:15:53.012 "model_number": "LH7wh4!5k_%[zq{gU}0[/9ZFUOl}jeuaPN1[-Ci\u007fN", 00:15:53.012 "method": "nvmf_create_subsystem", 00:15:53.012 "req_id": 1 00:15:53.012 } 00:15:53.012 Got JSON-RPC error response 00:15:53.012 response: 00:15:53.012 { 00:15:53.012 "code": -32602, 00:15:53.012 "message": "Invalid MN LH7wh4!5k_%[zq{gU}0[/9ZFUOl}jeuaPN1[-Ci\u007fN" 00:15:53.012 }' 00:15:53.012 21:19:27 -- target/invalid.sh@59 -- # [[ request: 00:15:53.012 { 00:15:53.012 "nqn": "nqn.2016-06.io.spdk:cnode21997", 00:15:53.012 "model_number": "LH7wh4!5k_%[zq{gU}0[/9ZFUOl}jeuaPN1[-Ci\u007fN", 00:15:53.012 "method": "nvmf_create_subsystem", 00:15:53.012 "req_id": 1 00:15:53.012 } 00:15:53.012 Got JSON-RPC error response 00:15:53.012 response: 00:15:53.012 { 00:15:53.012 "code": -32602, 00:15:53.012 "message": "Invalid MN LH7wh4!5k_%[zq{gU}0[/9ZFUOl}jeuaPN1[-Ci\u007fN" 00:15:53.012 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:53.012 21:19:27 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:53.271 [2024-07-26 21:19:28.031148] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9d1950/0x9d5e40) succeed. 
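The cntlid checks that follow all use the same pattern: issue an nvmf_create_subsystem call that is expected to fail, then match the JSON-RPC error text. A condensed, standalone version of one such check is sketched below; the rpc.py path is the one used in this CI workspace and would need to point at a local SPDK checkout elsewhere.

#!/usr/bin/env bash
# Condensed form of the negative cntlid-range check run below. The rpc.py path is
# the one from this CI workspace; adjust it for your own SPDK tree.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# A min_cntlid of 0 is out of range, so the target is expected to reject the call.
out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9914 -i 0 2>&1 || true)

# The check passes only if the expected error message came back.
[[ $out == *"Invalid cntlid range"* ]] && echo "got the expected cntlid-range error"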
00:15:53.271 [2024-07-26 21:19:28.041506] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9d2f40/0xa174d0) succeed. 00:15:53.531 21:19:28 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:53.531 21:19:28 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:53.531 21:19:28 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:53.531 192.168.100.9' 00:15:53.531 21:19:28 -- target/invalid.sh@67 -- # head -n 1 00:15:53.531 21:19:28 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:53.531 21:19:28 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:53.790 [2024-07-26 21:19:28.531929] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:53.790 21:19:28 -- target/invalid.sh@69 -- # out='request: 00:15:53.790 { 00:15:53.790 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:53.790 "listen_address": { 00:15:53.790 "trtype": "rdma", 00:15:53.790 "traddr": "192.168.100.8", 00:15:53.790 "trsvcid": "4421" 00:15:53.790 }, 00:15:53.790 "method": "nvmf_subsystem_remove_listener", 00:15:53.790 "req_id": 1 00:15:53.790 } 00:15:53.790 Got JSON-RPC error response 00:15:53.790 response: 00:15:53.790 { 00:15:53.790 "code": -32602, 00:15:53.790 "message": "Invalid parameters" 00:15:53.790 }' 00:15:53.790 21:19:28 -- target/invalid.sh@70 -- # [[ request: 00:15:53.790 { 00:15:53.790 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:53.790 "listen_address": { 00:15:53.790 "trtype": "rdma", 00:15:53.790 "traddr": "192.168.100.8", 00:15:53.790 "trsvcid": "4421" 00:15:53.790 }, 00:15:53.790 "method": "nvmf_subsystem_remove_listener", 00:15:53.790 "req_id": 1 00:15:53.790 } 00:15:53.790 Got JSON-RPC error response 00:15:53.790 response: 00:15:53.790 { 00:15:53.790 "code": -32602, 00:15:53.790 "message": "Invalid parameters" 00:15:53.790 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:53.790 21:19:28 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9914 -i 0 00:15:54.050 [2024-07-26 21:19:28.712574] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9914: invalid cntlid range [0-65519] 00:15:54.050 21:19:28 -- target/invalid.sh@73 -- # out='request: 00:15:54.050 { 00:15:54.050 "nqn": "nqn.2016-06.io.spdk:cnode9914", 00:15:54.050 "min_cntlid": 0, 00:15:54.050 "method": "nvmf_create_subsystem", 00:15:54.050 "req_id": 1 00:15:54.050 } 00:15:54.050 Got JSON-RPC error response 00:15:54.050 response: 00:15:54.050 { 00:15:54.050 "code": -32602, 00:15:54.050 "message": "Invalid cntlid range [0-65519]" 00:15:54.050 }' 00:15:54.050 21:19:28 -- target/invalid.sh@74 -- # [[ request: 00:15:54.050 { 00:15:54.050 "nqn": "nqn.2016-06.io.spdk:cnode9914", 00:15:54.050 "min_cntlid": 0, 00:15:54.050 "method": "nvmf_create_subsystem", 00:15:54.050 "req_id": 1 00:15:54.050 } 00:15:54.050 Got JSON-RPC error response 00:15:54.050 response: 00:15:54.050 { 00:15:54.050 "code": -32602, 00:15:54.050 "message": "Invalid cntlid range [0-65519]" 00:15:54.050 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:54.050 21:19:28 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21111 -i 65520 00:15:54.050 [2024-07-26 21:19:28.897268] nvmf_rpc.c: 
439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21111: invalid cntlid range [65520-65519] 00:15:54.309 21:19:28 -- target/invalid.sh@75 -- # out='request: 00:15:54.309 { 00:15:54.309 "nqn": "nqn.2016-06.io.spdk:cnode21111", 00:15:54.309 "min_cntlid": 65520, 00:15:54.309 "method": "nvmf_create_subsystem", 00:15:54.309 "req_id": 1 00:15:54.309 } 00:15:54.309 Got JSON-RPC error response 00:15:54.309 response: 00:15:54.309 { 00:15:54.309 "code": -32602, 00:15:54.309 "message": "Invalid cntlid range [65520-65519]" 00:15:54.309 }' 00:15:54.309 21:19:28 -- target/invalid.sh@76 -- # [[ request: 00:15:54.309 { 00:15:54.309 "nqn": "nqn.2016-06.io.spdk:cnode21111", 00:15:54.309 "min_cntlid": 65520, 00:15:54.309 "method": "nvmf_create_subsystem", 00:15:54.309 "req_id": 1 00:15:54.309 } 00:15:54.309 Got JSON-RPC error response 00:15:54.309 response: 00:15:54.309 { 00:15:54.309 "code": -32602, 00:15:54.309 "message": "Invalid cntlid range [65520-65519]" 00:15:54.309 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:54.309 21:19:28 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12121 -I 0 00:15:54.309 [2024-07-26 21:19:29.098000] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12121: invalid cntlid range [1-0] 00:15:54.309 21:19:29 -- target/invalid.sh@77 -- # out='request: 00:15:54.309 { 00:15:54.309 "nqn": "nqn.2016-06.io.spdk:cnode12121", 00:15:54.309 "max_cntlid": 0, 00:15:54.309 "method": "nvmf_create_subsystem", 00:15:54.309 "req_id": 1 00:15:54.309 } 00:15:54.309 Got JSON-RPC error response 00:15:54.309 response: 00:15:54.309 { 00:15:54.309 "code": -32602, 00:15:54.309 "message": "Invalid cntlid range [1-0]" 00:15:54.309 }' 00:15:54.309 21:19:29 -- target/invalid.sh@78 -- # [[ request: 00:15:54.309 { 00:15:54.309 "nqn": "nqn.2016-06.io.spdk:cnode12121", 00:15:54.309 "max_cntlid": 0, 00:15:54.309 "method": "nvmf_create_subsystem", 00:15:54.309 "req_id": 1 00:15:54.309 } 00:15:54.309 Got JSON-RPC error response 00:15:54.309 response: 00:15:54.309 { 00:15:54.309 "code": -32602, 00:15:54.309 "message": "Invalid cntlid range [1-0]" 00:15:54.309 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:54.310 21:19:29 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4865 -I 65520 00:15:54.568 [2024-07-26 21:19:29.278643] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4865: invalid cntlid range [1-65520] 00:15:54.568 21:19:29 -- target/invalid.sh@79 -- # out='request: 00:15:54.568 { 00:15:54.568 "nqn": "nqn.2016-06.io.spdk:cnode4865", 00:15:54.568 "max_cntlid": 65520, 00:15:54.568 "method": "nvmf_create_subsystem", 00:15:54.568 "req_id": 1 00:15:54.568 } 00:15:54.568 Got JSON-RPC error response 00:15:54.568 response: 00:15:54.568 { 00:15:54.568 "code": -32602, 00:15:54.568 "message": "Invalid cntlid range [1-65520]" 00:15:54.568 }' 00:15:54.568 21:19:29 -- target/invalid.sh@80 -- # [[ request: 00:15:54.568 { 00:15:54.568 "nqn": "nqn.2016-06.io.spdk:cnode4865", 00:15:54.568 "max_cntlid": 65520, 00:15:54.568 "method": "nvmf_create_subsystem", 00:15:54.568 "req_id": 1 00:15:54.568 } 00:15:54.568 Got JSON-RPC error response 00:15:54.568 response: 00:15:54.568 { 00:15:54.568 "code": -32602, 00:15:54.568 "message": "Invalid cntlid range [1-65520]" 00:15:54.568 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:54.568 
21:19:29 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7660 -i 6 -I 5 00:15:54.828 [2024-07-26 21:19:29.463323] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7660: invalid cntlid range [6-5] 00:15:54.828 21:19:29 -- target/invalid.sh@83 -- # out='request: 00:15:54.828 { 00:15:54.828 "nqn": "nqn.2016-06.io.spdk:cnode7660", 00:15:54.828 "min_cntlid": 6, 00:15:54.828 "max_cntlid": 5, 00:15:54.828 "method": "nvmf_create_subsystem", 00:15:54.828 "req_id": 1 00:15:54.828 } 00:15:54.828 Got JSON-RPC error response 00:15:54.828 response: 00:15:54.828 { 00:15:54.828 "code": -32602, 00:15:54.828 "message": "Invalid cntlid range [6-5]" 00:15:54.828 }' 00:15:54.828 21:19:29 -- target/invalid.sh@84 -- # [[ request: 00:15:54.828 { 00:15:54.828 "nqn": "nqn.2016-06.io.spdk:cnode7660", 00:15:54.828 "min_cntlid": 6, 00:15:54.828 "max_cntlid": 5, 00:15:54.828 "method": "nvmf_create_subsystem", 00:15:54.828 "req_id": 1 00:15:54.828 } 00:15:54.828 Got JSON-RPC error response 00:15:54.828 response: 00:15:54.828 { 00:15:54.828 "code": -32602, 00:15:54.828 "message": "Invalid cntlid range [6-5]" 00:15:54.828 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:54.828 21:19:29 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:54.828 21:19:29 -- target/invalid.sh@87 -- # out='request: 00:15:54.828 { 00:15:54.828 "name": "foobar", 00:15:54.828 "method": "nvmf_delete_target", 00:15:54.828 "req_id": 1 00:15:54.828 } 00:15:54.828 Got JSON-RPC error response 00:15:54.828 response: 00:15:54.828 { 00:15:54.828 "code": -32602, 00:15:54.828 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:54.828 }' 00:15:54.828 21:19:29 -- target/invalid.sh@88 -- # [[ request: 00:15:54.828 { 00:15:54.828 "name": "foobar", 00:15:54.828 "method": "nvmf_delete_target", 00:15:54.828 "req_id": 1 00:15:54.828 } 00:15:54.828 Got JSON-RPC error response 00:15:54.828 response: 00:15:54.828 { 00:15:54.828 "code": -32602, 00:15:54.828 "message": "The specified target doesn't exist, cannot delete it." 
00:15:54.828 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:54.828 21:19:29 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:54.828 21:19:29 -- target/invalid.sh@91 -- # nvmftestfini 00:15:54.828 21:19:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:54.828 21:19:29 -- nvmf/common.sh@116 -- # sync 00:15:54.828 21:19:29 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:54.828 21:19:29 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:54.828 21:19:29 -- nvmf/common.sh@119 -- # set +e 00:15:54.828 21:19:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:54.828 21:19:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:54.828 rmmod nvme_rdma 00:15:54.828 rmmod nvme_fabrics 00:15:54.828 21:19:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:54.828 21:19:29 -- nvmf/common.sh@123 -- # set -e 00:15:54.828 21:19:29 -- nvmf/common.sh@124 -- # return 0 00:15:54.828 21:19:29 -- nvmf/common.sh@477 -- # '[' -n 1629722 ']' 00:15:54.828 21:19:29 -- nvmf/common.sh@478 -- # killprocess 1629722 00:15:54.828 21:19:29 -- common/autotest_common.sh@926 -- # '[' -z 1629722 ']' 00:15:54.828 21:19:29 -- common/autotest_common.sh@930 -- # kill -0 1629722 00:15:54.828 21:19:29 -- common/autotest_common.sh@931 -- # uname 00:15:54.828 21:19:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:54.828 21:19:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1629722 00:15:55.087 21:19:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:55.087 21:19:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:55.087 21:19:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1629722' 00:15:55.087 killing process with pid 1629722 00:15:55.087 21:19:29 -- common/autotest_common.sh@945 -- # kill 1629722 00:15:55.087 21:19:29 -- common/autotest_common.sh@950 -- # wait 1629722 00:15:55.087 21:19:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:55.347 21:19:29 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:55.347 00:15:55.347 real 0m13.022s 00:15:55.347 user 0m21.287s 00:15:55.347 sys 0m7.705s 00:15:55.347 21:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.347 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:15:55.347 ************************************ 00:15:55.347 END TEST nvmf_invalid 00:15:55.347 ************************************ 00:15:55.347 21:19:29 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:55.347 21:19:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:55.347 21:19:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:55.347 21:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:55.347 ************************************ 00:15:55.347 START TEST nvmf_abort 00:15:55.347 ************************************ 00:15:55.347 21:19:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:55.347 * Looking for test storage... 
00:15:55.347 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:55.347 21:19:30 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.347 21:19:30 -- nvmf/common.sh@7 -- # uname -s 00:15:55.347 21:19:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.347 21:19:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.347 21:19:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.347 21:19:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.347 21:19:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.347 21:19:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.347 21:19:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.347 21:19:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.347 21:19:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.347 21:19:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.347 21:19:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:55.347 21:19:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:55.347 21:19:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.347 21:19:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.347 21:19:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.347 21:19:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:55.347 21:19:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.347 21:19:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.347 21:19:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.348 21:19:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.348 21:19:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.348 21:19:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.348 21:19:30 -- paths/export.sh@5 -- # export PATH 00:15:55.348 21:19:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.348 21:19:30 -- nvmf/common.sh@46 -- # : 0 00:15:55.348 21:19:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:55.348 21:19:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:55.348 21:19:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:55.348 21:19:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.348 21:19:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.348 21:19:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:55.348 21:19:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:55.348 21:19:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:55.348 21:19:30 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:55.348 21:19:30 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:55.348 21:19:30 -- target/abort.sh@14 -- # nvmftestinit 00:15:55.348 21:19:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:55.348 21:19:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.348 21:19:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:55.348 21:19:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:55.348 21:19:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:55.348 21:19:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.348 21:19:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.348 21:19:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.348 21:19:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:55.348 21:19:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:55.348 21:19:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:55.348 21:19:30 -- common/autotest_common.sh@10 -- # set +x 00:16:03.470 21:19:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:03.470 21:19:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:03.470 21:19:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:03.470 21:19:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:03.470 21:19:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:03.470 21:19:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:03.470 21:19:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:03.470 21:19:37 -- nvmf/common.sh@294 -- # net_devs=() 00:16:03.470 21:19:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:03.470 21:19:37 -- nvmf/common.sh@295 -- 
# e810=() 00:16:03.470 21:19:37 -- nvmf/common.sh@295 -- # local -ga e810 00:16:03.470 21:19:37 -- nvmf/common.sh@296 -- # x722=() 00:16:03.470 21:19:37 -- nvmf/common.sh@296 -- # local -ga x722 00:16:03.470 21:19:37 -- nvmf/common.sh@297 -- # mlx=() 00:16:03.470 21:19:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:03.470 21:19:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.470 21:19:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:03.470 21:19:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:03.470 21:19:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:03.470 21:19:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:03.470 21:19:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:03.470 21:19:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:03.470 21:19:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:03.470 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:03.470 21:19:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:03.470 21:19:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:03.470 21:19:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:03.470 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:03.470 21:19:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:03.470 21:19:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:03.470 21:19:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:03.470 21:19:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:03.470 21:19:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.470 21:19:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
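The trace above is nvmf/common.sh narrowing the NIC inventory for this run: it seeds the e810/x722/mlx arrays with known Intel and Mellanox PCI IDs, keeps only the mlx5 parts present on this rig (vendor 0x15b3, device 0x1015), and then resolves each PCI function to the net device exposed under its sysfs node. A rough bash equivalent of that resolution step, using lspci as a stand-in for the pci_bus_cache that common.sh builds internally (a hedged sketch, not the script's actual code):

    # Map every Mellanox 0x15b3:0x1015 PCI function to its kernel net device.
    for pci in $(lspci -Dn -d 15b3:1015 | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: $(basename "$dev")"
        done
    done

On this system that yields mlx_0_0 under 0000:d9:00.0 and mlx_0_1 under 0000:d9:00.1, which is what the trace echoes next.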
00:16:03.470 21:19:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.470 21:19:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:03.470 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:03.470 21:19:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.471 21:19:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.471 21:19:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:03.471 21:19:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.471 21:19:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:03.471 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:03.471 21:19:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.471 21:19:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:03.471 21:19:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:03.471 21:19:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:03.471 21:19:37 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:03.471 21:19:37 -- nvmf/common.sh@57 -- # uname 00:16:03.471 21:19:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:03.471 21:19:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:03.471 21:19:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:03.471 21:19:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:03.471 21:19:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:03.471 21:19:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:03.471 21:19:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:03.471 21:19:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:03.471 21:19:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:03.471 21:19:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:03.471 21:19:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:03.471 21:19:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:03.471 21:19:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:03.471 21:19:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:03.471 21:19:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:03.471 21:19:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:03.471 21:19:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:03.471 21:19:37 -- nvmf/common.sh@104 -- # continue 2 00:16:03.471 21:19:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:03.471 21:19:37 -- nvmf/common.sh@104 -- # continue 2 00:16:03.471 21:19:37 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:16:03.471 21:19:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:03.471 21:19:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:03.471 21:19:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:03.471 21:19:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:03.471 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:03.471 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:03.471 altname enp217s0f0np0 00:16:03.471 altname ens818f0np0 00:16:03.471 inet 192.168.100.8/24 scope global mlx_0_0 00:16:03.471 valid_lft forever preferred_lft forever 00:16:03.471 21:19:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:03.471 21:19:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:03.471 21:19:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:03.471 21:19:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:03.471 21:19:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:03.471 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:03.471 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:03.471 altname enp217s0f1np1 00:16:03.471 altname ens818f1np1 00:16:03.471 inet 192.168.100.9/24 scope global mlx_0_1 00:16:03.471 valid_lft forever preferred_lft forever 00:16:03.471 21:19:37 -- nvmf/common.sh@410 -- # return 0 00:16:03.471 21:19:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:03.471 21:19:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:03.471 21:19:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:03.471 21:19:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:03.471 21:19:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:03.471 21:19:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:03.471 21:19:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:03.471 21:19:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:03.471 21:19:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:03.471 21:19:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:03.471 21:19:37 -- nvmf/common.sh@104 -- # continue 2 00:16:03.471 21:19:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:03.471 21:19:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:03.471 21:19:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:03.471 21:19:37 -- 
nvmf/common.sh@104 -- # continue 2 00:16:03.471 21:19:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:03.471 21:19:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:03.471 21:19:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:03.471 21:19:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:03.471 21:19:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:03.471 21:19:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:03.471 21:19:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:03.471 21:19:37 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:03.471 192.168.100.9' 00:16:03.471 21:19:37 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:03.471 192.168.100.9' 00:16:03.471 21:19:37 -- nvmf/common.sh@445 -- # head -n 1 00:16:03.471 21:19:37 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:03.471 21:19:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:03.471 192.168.100.9' 00:16:03.471 21:19:37 -- nvmf/common.sh@446 -- # tail -n +2 00:16:03.471 21:19:37 -- nvmf/common.sh@446 -- # head -n 1 00:16:03.471 21:19:37 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:03.471 21:19:37 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:03.471 21:19:37 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:03.471 21:19:37 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:03.471 21:19:37 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:03.471 21:19:37 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:03.471 21:19:37 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:16:03.471 21:19:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:03.471 21:19:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:03.471 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:16:03.471 21:19:37 -- nvmf/common.sh@469 -- # nvmfpid=1634599 00:16:03.471 21:19:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:03.471 21:19:37 -- nvmf/common.sh@470 -- # waitforlisten 1634599 00:16:03.471 21:19:37 -- common/autotest_common.sh@819 -- # '[' -z 1634599 ']' 00:16:03.472 21:19:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.472 21:19:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.472 21:19:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.472 21:19:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.472 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:16:03.472 [2024-07-26 21:19:37.744354] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
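nvmfappstart -m 0xE above boils down to launching build/bin/nvmf_tgt with the logged arguments and then blocking in waitforlisten until the target answers on /var/tmp/spdk.sock. A simplified stand-in for that start-and-wait pattern (the explicit polling loop and the use of spdk_get_version as a liveness probe are assumptions of this sketch, not the helper's real internals):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the default RPC socket until the target is ready to serve requests.
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done

Once that loop exits, the rpc_cmd/rpc.py calls that follow can assume a live target.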
00:16:03.472 [2024-07-26 21:19:37.744403] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.472 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.472 [2024-07-26 21:19:37.829565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:03.472 [2024-07-26 21:19:37.867325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:03.472 [2024-07-26 21:19:37.867434] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.472 [2024-07-26 21:19:37.867443] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.472 [2024-07-26 21:19:37.867452] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.472 [2024-07-26 21:19:37.867556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.472 [2024-07-26 21:19:37.867644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.472 [2024-07-26 21:19:37.867647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.731 21:19:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:03.731 21:19:38 -- common/autotest_common.sh@852 -- # return 0 00:16:03.731 21:19:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:03.731 21:19:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:03.731 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:03.731 21:19:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.731 21:19:38 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:16:03.731 21:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.731 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:03.990 [2024-07-26 21:19:38.619983] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b836c0/0x1b87bb0) succeed. 00:16:03.990 [2024-07-26 21:19:38.629914] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b84c10/0x1bc9240) succeed. 
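With the RDMA transport created (the two mlx5 IB devices above), the abort test assembles its target side: a 64 MB malloc bdev, a delay bdev layered on top of it, subsystem cnode0 with that namespace, and the data plus discovery listeners. The rpc_cmd trace that follows is equivalent to these plain rpc.py calls, collected here for readability (the default /var/tmp/spdk.sock RPC socket is assumed):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The delay bdev is what makes the abort workload interesting: with roughly a second added to every I/O, the abort example has a large backlog of in-flight commands to cancel.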
00:16:03.990 21:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.990 21:19:38 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:16:03.990 21:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.990 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:03.990 Malloc0 00:16:03.990 21:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.990 21:19:38 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:03.990 21:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.990 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:03.990 Delay0 00:16:03.990 21:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.990 21:19:38 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:03.990 21:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.990 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:03.990 21:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.990 21:19:38 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:16:03.990 21:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.990 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:03.990 21:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.990 21:19:38 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:03.990 21:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.990 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:03.990 [2024-07-26 21:19:38.781790] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:03.990 21:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.990 21:19:38 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:03.990 21:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.990 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:03.990 21:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.990 21:19:38 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:16:03.990 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.990 [2024-07-26 21:19:38.857230] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:06.527 Initializing NVMe Controllers 00:16:06.527 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:16:06.527 controller IO queue size 128 less than required 00:16:06.527 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:16:06.527 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:06.527 Initialization complete. Launching workers. 
00:16:06.527 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51664 00:16:06.527 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51725, failed to submit 62 00:16:06.527 success 51664, unsuccess 61, failed 0 00:16:06.527 21:19:40 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:06.527 21:19:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.527 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:16:06.527 21:19:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.527 21:19:40 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:06.527 21:19:40 -- target/abort.sh@38 -- # nvmftestfini 00:16:06.527 21:19:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:06.527 21:19:40 -- nvmf/common.sh@116 -- # sync 00:16:06.527 21:19:40 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:06.527 21:19:40 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:06.527 21:19:40 -- nvmf/common.sh@119 -- # set +e 00:16:06.527 21:19:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:06.527 21:19:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:06.527 rmmod nvme_rdma 00:16:06.527 rmmod nvme_fabrics 00:16:06.527 21:19:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:06.527 21:19:41 -- nvmf/common.sh@123 -- # set -e 00:16:06.527 21:19:41 -- nvmf/common.sh@124 -- # return 0 00:16:06.527 21:19:41 -- nvmf/common.sh@477 -- # '[' -n 1634599 ']' 00:16:06.527 21:19:41 -- nvmf/common.sh@478 -- # killprocess 1634599 00:16:06.527 21:19:41 -- common/autotest_common.sh@926 -- # '[' -z 1634599 ']' 00:16:06.527 21:19:41 -- common/autotest_common.sh@930 -- # kill -0 1634599 00:16:06.527 21:19:41 -- common/autotest_common.sh@931 -- # uname 00:16:06.527 21:19:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:06.527 21:19:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1634599 00:16:06.527 21:19:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:06.527 21:19:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:06.527 21:19:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1634599' 00:16:06.527 killing process with pid 1634599 00:16:06.527 21:19:41 -- common/autotest_common.sh@945 -- # kill 1634599 00:16:06.527 21:19:41 -- common/autotest_common.sh@950 -- # wait 1634599 00:16:06.527 21:19:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:06.527 21:19:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:06.527 00:16:06.527 real 0m11.313s 00:16:06.527 user 0m14.356s 00:16:06.527 sys 0m6.252s 00:16:06.527 21:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.527 21:19:41 -- common/autotest_common.sh@10 -- # set +x 00:16:06.527 ************************************ 00:16:06.527 END TEST nvmf_abort 00:16:06.527 ************************************ 00:16:06.527 21:19:41 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:16:06.527 21:19:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:06.527 21:19:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:06.527 21:19:41 -- common/autotest_common.sh@10 -- # set +x 00:16:06.527 ************************************ 00:16:06.527 START TEST nvmf_ns_hotplug_stress 00:16:06.527 ************************************ 00:16:06.527 21:19:41 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:16:06.787 * Looking for test storage... 00:16:06.787 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:06.787 21:19:41 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.787 21:19:41 -- nvmf/common.sh@7 -- # uname -s 00:16:06.787 21:19:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.787 21:19:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.787 21:19:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.787 21:19:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.787 21:19:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.787 21:19:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.787 21:19:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.787 21:19:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.787 21:19:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.787 21:19:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.787 21:19:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:06.787 21:19:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:06.787 21:19:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.787 21:19:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.787 21:19:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.787 21:19:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:06.787 21:19:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.787 21:19:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.787 21:19:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.787 21:19:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.787 21:19:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.787 21:19:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.787 21:19:41 -- paths/export.sh@5 -- # export PATH 00:16:06.787 21:19:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.787 21:19:41 -- nvmf/common.sh@46 -- # : 0 00:16:06.787 21:19:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:06.787 21:19:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:06.787 21:19:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:06.787 21:19:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.787 21:19:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.787 21:19:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:06.787 21:19:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:06.787 21:19:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:06.787 21:19:41 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:06.787 21:19:41 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:16:06.787 21:19:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:06.787 21:19:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.788 21:19:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:06.788 21:19:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:06.788 21:19:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:06.788 21:19:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.788 21:19:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.788 21:19:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.788 21:19:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:06.788 21:19:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:06.788 21:19:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:06.788 21:19:41 -- common/autotest_common.sh@10 -- # set +x 00:16:14.906 21:19:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:14.906 21:19:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:14.906 21:19:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:14.906 21:19:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:14.906 21:19:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:14.906 21:19:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:14.906 21:19:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:14.906 21:19:49 -- nvmf/common.sh@294 -- # net_devs=() 00:16:14.906 21:19:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:14.906 21:19:49 -- nvmf/common.sh@295 -- 
# e810=() 00:16:14.906 21:19:49 -- nvmf/common.sh@295 -- # local -ga e810 00:16:14.906 21:19:49 -- nvmf/common.sh@296 -- # x722=() 00:16:14.906 21:19:49 -- nvmf/common.sh@296 -- # local -ga x722 00:16:14.906 21:19:49 -- nvmf/common.sh@297 -- # mlx=() 00:16:14.906 21:19:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:14.906 21:19:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.906 21:19:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:14.906 21:19:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:14.906 21:19:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:14.906 21:19:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:14.906 21:19:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:14.906 21:19:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:14.906 21:19:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:14.906 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:14.906 21:19:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:14.906 21:19:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:14.906 21:19:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:14.906 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:14.906 21:19:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:14.906 21:19:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:14.906 21:19:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:14.906 21:19:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.906 21:19:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
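The same NIC discovery now repeats for the hotplug-stress run; after it, rdma_device_init loads the IB/RDMA kernel modules and allocate_nic_ips records the address of each mlx interface. The per-interface lookup that this trace keeps expanding (ip ... | awk | cut) amounts to one small helper, reconstructed here from the trace rather than copied from nvmf/common.sh:

    get_ip_address() {
        local interface=$1
        # First IPv4 address on the interface, with the /prefix stripped.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # 192.168.100.9 on this rig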
00:16:14.906 21:19:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.906 21:19:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:14.906 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:14.906 21:19:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.906 21:19:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:14.906 21:19:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.906 21:19:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:14.906 21:19:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.906 21:19:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:14.906 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:14.906 21:19:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.906 21:19:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:14.906 21:19:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:14.906 21:19:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:14.906 21:19:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:14.906 21:19:49 -- nvmf/common.sh@57 -- # uname 00:16:14.906 21:19:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:14.906 21:19:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:14.906 21:19:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:14.906 21:19:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:14.906 21:19:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:14.906 21:19:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:14.906 21:19:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:14.906 21:19:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:14.906 21:19:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:14.906 21:19:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:14.906 21:19:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:14.906 21:19:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:14.906 21:19:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:14.906 21:19:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:14.906 21:19:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:14.906 21:19:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:14.906 21:19:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:14.906 21:19:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.906 21:19:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:14.906 21:19:49 -- nvmf/common.sh@104 -- # continue 2 00:16:14.906 21:19:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:14.906 21:19:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.906 21:19:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.906 21:19:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:14.906 21:19:49 -- nvmf/common.sh@104 -- # continue 2 00:16:14.906 21:19:49 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:16:14.906 21:19:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:14.906 21:19:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:14.906 21:19:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:14.906 21:19:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:14.906 21:19:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:14.906 21:19:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:14.906 21:19:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:14.906 21:19:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:14.906 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:14.907 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:14.907 altname enp217s0f0np0 00:16:14.907 altname ens818f0np0 00:16:14.907 inet 192.168.100.8/24 scope global mlx_0_0 00:16:14.907 valid_lft forever preferred_lft forever 00:16:14.907 21:19:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:14.907 21:19:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:14.907 21:19:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:14.907 21:19:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:14.907 21:19:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:14.907 21:19:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:14.907 21:19:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:14.907 21:19:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:14.907 21:19:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:14.907 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:14.907 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:14.907 altname enp217s0f1np1 00:16:14.907 altname ens818f1np1 00:16:14.907 inet 192.168.100.9/24 scope global mlx_0_1 00:16:14.907 valid_lft forever preferred_lft forever 00:16:14.907 21:19:49 -- nvmf/common.sh@410 -- # return 0 00:16:14.907 21:19:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:14.907 21:19:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:14.907 21:19:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:14.907 21:19:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:14.907 21:19:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:14.907 21:19:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:14.907 21:19:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:14.907 21:19:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:14.907 21:19:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:14.907 21:19:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:14.907 21:19:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:14.907 21:19:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.907 21:19:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:14.907 21:19:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:14.907 21:19:49 -- nvmf/common.sh@104 -- # continue 2 00:16:14.907 21:19:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:14.907 21:19:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.907 21:19:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:14.907 21:19:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.907 21:19:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:14.907 21:19:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:14.907 21:19:49 -- 
nvmf/common.sh@104 -- # continue 2 00:16:14.907 21:19:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:14.907 21:19:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:14.907 21:19:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:14.907 21:19:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:14.907 21:19:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:14.907 21:19:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:14.907 21:19:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:14.907 21:19:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:14.907 21:19:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:14.907 21:19:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:14.907 21:19:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:14.907 21:19:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:14.907 21:19:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:14.907 192.168.100.9' 00:16:14.907 21:19:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:14.907 192.168.100.9' 00:16:14.907 21:19:49 -- nvmf/common.sh@445 -- # head -n 1 00:16:14.907 21:19:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:14.907 21:19:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:14.907 192.168.100.9' 00:16:14.907 21:19:49 -- nvmf/common.sh@446 -- # tail -n +2 00:16:14.907 21:19:49 -- nvmf/common.sh@446 -- # head -n 1 00:16:14.907 21:19:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:14.907 21:19:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:14.907 21:19:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:14.907 21:19:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:14.907 21:19:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:14.907 21:19:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:14.907 21:19:49 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:14.907 21:19:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:14.907 21:19:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:14.907 21:19:49 -- common/autotest_common.sh@10 -- # set +x 00:16:14.907 21:19:49 -- nvmf/common.sh@469 -- # nvmfpid=1639110 00:16:14.907 21:19:49 -- nvmf/common.sh@470 -- # waitforlisten 1639110 00:16:14.907 21:19:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:14.907 21:19:49 -- common/autotest_common.sh@819 -- # '[' -z 1639110 ']' 00:16:14.907 21:19:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.907 21:19:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:14.907 21:19:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.907 21:19:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:14.907 21:19:49 -- common/autotest_common.sh@10 -- # set +x 00:16:14.907 [2024-07-26 21:19:49.587727] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
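The two harvested addresses are joined into RDMA_IP_LIST and then split back apart with head/tail to become the first and second target IPs, which is all the nvmf/common.sh@444-446 lines above are doing. Restated compactly, with the values from this run:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

192.168.100.8 is then used for every listener in the hotplug-stress test below.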
00:16:14.907 [2024-07-26 21:19:49.587782] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.907 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.907 [2024-07-26 21:19:49.671326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:14.907 [2024-07-26 21:19:49.707696] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:14.907 [2024-07-26 21:19:49.707825] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.907 [2024-07-26 21:19:49.707834] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.907 [2024-07-26 21:19:49.707843] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.907 [2024-07-26 21:19:49.707942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.907 [2024-07-26 21:19:49.708027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.907 [2024-07-26 21:19:49.708029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.845 21:19:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:15.845 21:19:50 -- common/autotest_common.sh@852 -- # return 0 00:16:15.845 21:19:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:15.845 21:19:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:15.845 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:16:15.845 21:19:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.845 21:19:50 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:16:15.845 21:19:50 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:15.845 [2024-07-26 21:19:50.613279] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f2c860/0x1f30d50) succeed. 00:16:15.845 [2024-07-26 21:19:50.623451] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f2ddb0/0x1f723e0) succeed. 
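Up to this point the bring-up is straightforward: the two RDMA-capable ports mlx_0_0 and mlx_0_1 are resolved to 192.168.100.8 and 192.168.100.9, nvme-rdma is loaded, nvmf_tgt is started on core mask 0xE, and once it is listening on the RPC socket an RDMA transport is created, which is what makes the two mlx5 IB devices appear above. A minimal sketch of those two steps, assuming a local SPDK checkout and the default /var/tmp/spdk.sock RPC socket rather than the Jenkins workspace paths shown in the log:

  # sketch only, not part of the captured log
  ./build/bin/nvmf_tgt -m 0xE &
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192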
00:16:16.104 21:19:50 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:16.104 21:19:50 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:16.400 [2024-07-26 21:19:51.063690] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:16.400 21:19:51 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:16.400 21:19:51 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:16.659 Malloc0 00:16:16.660 21:19:51 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:16.918 Delay0 00:16:16.918 21:19:51 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:16.918 21:19:51 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:17.178 NULL1 00:16:17.178 21:19:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:17.437 21:19:52 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:17.437 21:19:52 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1639677 00:16:17.437 21:19:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:17.437 21:19:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.437 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.813 Read completed with error (sct=0, sc=11) 00:16:18.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.813 21:19:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:18.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.813 21:19:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:18.813 21:19:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:18.813 true 00:16:18.813 21:19:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:18.813 21:19:53 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.750 21:19:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:19.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.009 21:19:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:16:20.009 21:19:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:20.009 true 00:16:20.009 21:19:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:20.009 21:19:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:20.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.945 21:19:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:20.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.205 21:19:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:16:21.205 21:19:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:21.205 true 00:16:21.205 21:19:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:21.205 21:19:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.141 21:19:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.399 21:19:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:16:22.399 21:19:57 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:22.399 true 00:16:22.399 21:19:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:22.399 21:19:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.334 21:19:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:23.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.592 21:19:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:16:23.592 21:19:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:23.592 true 00:16:23.592 21:19:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:23.592 21:19:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.528 21:19:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:24.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.787 21:19:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:16:24.787 21:19:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:24.787 true 00:16:24.787 21:19:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:24.787 21:19:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.722 21:20:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:25.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.981 21:20:00 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1007 00:16:25.981 21:20:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:25.981 true 00:16:25.981 21:20:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:25.981 21:20:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.916 21:20:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.175 21:20:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:16:27.175 21:20:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:27.175 true 00:16:27.175 21:20:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:27.175 21:20:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.110 21:20:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.369 21:20:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:16:28.369 21:20:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:28.369 true 00:16:28.369 21:20:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:28.369 21:20:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:29.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.306 21:20:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:29.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.306 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:16:29.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.565 21:20:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:16:29.566 21:20:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:29.566 true 00:16:29.566 21:20:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:29.566 21:20:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.503 21:20:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.762 21:20:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:16:30.762 21:20:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:30.762 true 00:16:30.762 21:20:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:30.762 21:20:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.699 21:20:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:31.957 21:20:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:16:31.957 21:20:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:31.957 true 00:16:31.957 21:20:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:31.957 21:20:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.215 21:20:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.504 21:20:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:16:32.504 21:20:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:32.504 true 00:16:32.504 21:20:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:32.504 21:20:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.895 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.895 21:20:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:33.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.895 21:20:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:16:33.895 21:20:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:34.153 true 00:16:34.153 21:20:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:34.153 21:20:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.089 21:20:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.089 21:20:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:16:35.089 21:20:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:35.349 true 00:16:35.349 21:20:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:35.349 21:20:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.284 21:20:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:36.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.284 21:20:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:16:36.284 21:20:11 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:36.542 true 00:16:36.543 21:20:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:36.543 21:20:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.477 21:20:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:37.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.477 21:20:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:37.477 21:20:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:37.736 true 00:16:37.736 21:20:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:37.736 21:20:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.670 21:20:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:38.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.670 21:20:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:38.670 21:20:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:38.928 true 00:16:38.928 21:20:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:38.928 21:20:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.866 21:20:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.866 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:16:39.866 21:20:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:39.866 21:20:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:40.125 true 00:16:40.125 21:20:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:40.125 21:20:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.063 21:20:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:41.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.063 21:20:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:41.063 21:20:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:41.322 true 00:16:41.322 21:20:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:41.322 21:20:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:42.260 21:20:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:42.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:42.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:42.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:42.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:42.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:42.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:42.260 21:20:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:42.260 21:20:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:42.518 true 00:16:42.518 21:20:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:42.518 21:20:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:43.456 21:20:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
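Every iteration above repeats the same pattern while spdk_nvme_perf (PID 1639677) keeps issuing reads: Delay0 is hot-added as a namespace, NULL1 is grown by one block, and namespace 1 is hot-removed again, which is why the host keeps logging the suppressed read errors (sct=0, sc=11). A paraphrased sketch of that loop, with the commands taken from the trace (rpc.py stands for the full scripts/rpc.py path shown in the log) and the surrounding while-loop structure assumed rather than copied from ns_hotplug_stress.sh:

  null_size=1000
  while kill -0 "$PERF_PID"; do                                        # run as long as the perf job is alive
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add the delay bdev as a namespace
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"                       # resize the null bdev under the live connection
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove namespace 1
  done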
00:16:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:43.456 21:20:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:43.456 21:20:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:43.715 true 00:16:43.715 21:20:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:43.715 21:20:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:44.652 21:20:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:44.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:44.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:44.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:44.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:44.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:44.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:44.652 21:20:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:16:44.652 21:20:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:44.912 true 00:16:44.912 21:20:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:44.912 21:20:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.850 21:20:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:45.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.850 21:20:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:45.850 21:20:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:46.109 true 00:16:46.109 21:20:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:46.109 21:20:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:47.046 21:20:21 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.046 21:20:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:47.046 21:20:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:47.306 true 00:16:47.306 21:20:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:47.306 21:20:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.306 21:20:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.565 21:20:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:47.565 21:20:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:47.824 true 00:16:47.824 21:20:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:47.824 21:20:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.824 21:20:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:48.083 21:20:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:48.083 21:20:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:48.342 true 00:16:48.342 21:20:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:48.342 21:20:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.342 21:20:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:48.621 21:20:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:48.621 21:20:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:48.887 true 00:16:48.887 21:20:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:48.887 21:20:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.887 21:20:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:49.145 21:20:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:49.145 21:20:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:49.404 true 00:16:49.404 21:20:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:49.404 21:20:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.404 21:20:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:49.663 Initializing NVMe Controllers 00:16:49.663 
Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:49.663 Controller IO queue size 128, less than required. 00:16:49.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:49.663 Controller IO queue size 128, less than required. 00:16:49.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:49.663 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:49.663 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:49.663 Initialization complete. Launching workers. 00:16:49.663 ======================================================== 00:16:49.663 Latency(us) 00:16:49.663 Device Information : IOPS MiB/s Average min max 00:16:49.663 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5335.38 2.61 21143.58 901.31 1132050.24 00:16:49.663 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35912.23 17.54 3564.18 1706.21 281746.30 00:16:49.663 ======================================================== 00:16:49.663 Total : 41247.61 20.14 5838.07 901.31 1132050.24 00:16:49.663 00:16:49.663 21:20:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:16:49.663 21:20:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:16:49.921 true 00:16:49.921 21:20:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1639677 00:16:49.921 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1639677) - No such process 00:16:49.921 21:20:24 -- target/ns_hotplug_stress.sh@53 -- # wait 1639677 00:16:49.921 21:20:24 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:50.180 21:20:24 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:50.180 21:20:24 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:50.180 21:20:24 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:50.180 21:20:24 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:50.180 21:20:24 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:50.180 21:20:24 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:50.439 null0 00:16:50.439 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:50.439 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:50.439 21:20:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:50.439 null1 00:16:50.697 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:50.697 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:50.697 21:20:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:50.697 null2 00:16:50.697 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:50.697 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:50.697 21:20:25 -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:50.956 null3 00:16:50.956 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:50.956 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:50.956 21:20:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:50.956 null4 00:16:51.215 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:51.215 21:20:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:51.215 21:20:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:51.215 null5 00:16:51.215 21:20:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:51.215 21:20:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:51.215 21:20:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:51.474 null6 00:16:51.474 21:20:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:51.474 21:20:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:51.474 21:20:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:51.474 null7 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
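By now the perf workload has exited (kill -0 reports that PID 1639677 no longer exists), the remaining namespaces have been removed, and the test moves to its parallel phase: eight small null bdevs, null0 through null7, each 100 blocks of 4096 bytes, are created and a background add_remove worker is started for each of them. A sketch of the bdev-creation loop, with the names and sizes taken from the trace and the loop form itself assumed:

  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      rpc.py bdev_null_create "null$i" 100 4096    # 100 blocks x 4096 bytes, as in the log
  done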
00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.734 21:20:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@66 -- # wait 1645689 1645690 1645692 1645694 1645696 1645698 1645700 1645702 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:51.735 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
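The interleaved xtrace that follows comes from those eight background workers; the parent simply waits on all of their PIDs (the wait 1645689 ... line above). Each worker runs the script's add_remove helper, which attaches its null bdev as a fixed namespace ID and detaches it again, ten times over. A sketch of the helper as it can be read back out of the trace (the function wrapper and loop syntax are assumed):

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }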
00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:51.995 21:20:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:52.254 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:52.255 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:52.255 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:52.255 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:52.255 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:52.255 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:52.255 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:52.255 21:20:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.255 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:52.514 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:52.514 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:52.514 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:52.514 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:52.514 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:52.514 21:20:27 -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:52.514 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:52.514 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:52.772 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.030 21:20:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:53.289 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:53.289 21:20:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:53.289 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.289 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:53.289 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:53.289 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:53.289 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:53.289 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.548 21:20:28 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.548 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:53.806 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.064 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:54.323 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.323 21:20:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.323 21:20:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:54.323 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.323 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:54.323 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:54.323 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:54.323 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:54.323 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:54.323 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:54.323 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.581 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:54.582 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.582 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.839 21:20:29 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.839 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.840 21:20:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:55.098 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.098 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:55.098 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:55.098 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:55.098 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:55.098 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:55.098 21:20:29 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.098 21:20:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:55.098 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.098 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.356 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.356 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.356 21:20:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:55.356 21:20:30 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:55.356 21:20:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:55.356 21:20:30 -- nvmf/common.sh@116 -- # sync 00:16:55.356 21:20:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:55.356 21:20:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:55.356 21:20:30 -- nvmf/common.sh@119 -- # set +e 00:16:55.356 21:20:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:55.356 21:20:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:55.356 rmmod nvme_rdma 00:16:55.356 rmmod nvme_fabrics 00:16:55.356 21:20:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:55.356 21:20:30 -- nvmf/common.sh@123 -- # set -e 00:16:55.356 21:20:30 -- nvmf/common.sh@124 -- # return 0 00:16:55.356 21:20:30 -- nvmf/common.sh@477 -- # '[' -n 1639110 ']' 00:16:55.356 21:20:30 -- nvmf/common.sh@478 -- # killprocess 1639110 00:16:55.356 21:20:30 -- common/autotest_common.sh@926 -- # '[' -z 1639110 ']' 00:16:55.356 21:20:30 -- common/autotest_common.sh@930 -- # kill -0 1639110 00:16:55.356 21:20:30 -- common/autotest_common.sh@931 -- # uname 00:16:55.356 21:20:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:55.356 21:20:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1639110 00:16:55.356 21:20:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:55.356 21:20:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:55.356 21:20:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1639110' 00:16:55.356 killing process with pid 1639110 00:16:55.356 21:20:30 -- common/autotest_common.sh@945 -- # kill 1639110 00:16:55.356 21:20:30 -- common/autotest_common.sh@950 -- # wait 1639110 00:16:55.615 21:20:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:55.615 21:20:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:55.615 00:16:55.615 real 0m48.989s 00:16:55.615 user 3m13.285s 
00:16:55.615 sys 0m15.506s 00:16:55.615 21:20:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.615 21:20:30 -- common/autotest_common.sh@10 -- # set +x 00:16:55.615 ************************************ 00:16:55.615 END TEST nvmf_ns_hotplug_stress 00:16:55.615 ************************************ 00:16:55.615 21:20:30 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:55.615 21:20:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:55.615 21:20:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:55.615 21:20:30 -- common/autotest_common.sh@10 -- # set +x 00:16:55.615 ************************************ 00:16:55.615 START TEST nvmf_connect_stress 00:16:55.615 ************************************ 00:16:55.615 21:20:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:55.615 * Looking for test storage... 00:16:55.875 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:55.875 21:20:30 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:55.875 21:20:30 -- nvmf/common.sh@7 -- # uname -s 00:16:55.875 21:20:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.875 21:20:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.875 21:20:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.875 21:20:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.875 21:20:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.875 21:20:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.875 21:20:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.875 21:20:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.875 21:20:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.875 21:20:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.875 21:20:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:55.875 21:20:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:55.875 21:20:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.875 21:20:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.875 21:20:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:55.875 21:20:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:55.875 21:20:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.875 21:20:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.875 21:20:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.875 21:20:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.875 21:20:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.875 21:20:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.875 21:20:30 -- paths/export.sh@5 -- # export PATH 00:16:55.875 21:20:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.875 21:20:30 -- nvmf/common.sh@46 -- # : 0 00:16:55.875 21:20:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:55.875 21:20:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:55.875 21:20:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:55.875 21:20:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.875 21:20:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.875 21:20:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:55.875 21:20:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:55.875 21:20:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:55.875 21:20:30 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:55.875 21:20:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:55.875 21:20:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.875 21:20:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:55.875 21:20:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:55.875 21:20:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:55.875 21:20:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.875 21:20:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.875 21:20:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.875 21:20:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:55.875 21:20:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:55.875 21:20:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:55.875 21:20:30 -- common/autotest_common.sh@10 -- # set +x 00:17:03.997 21:20:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:03.997 21:20:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:03.997 21:20:37 -- nvmf/common.sh@290 -- # local -a pci_devs 
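At this point the trace has moved from the hotplug-stress teardown into nvmftestinit for the connect-stress test: it checks that a transport was given, arms a trap so nvmftestfini always runs, and then starts probing for RDMA-capable NICs. A minimal self-contained sketch of that arm-the-teardown pattern, with the helper bodies reduced to the commands visible in this log (the real nvmf/common.sh does considerably more):

nvmftestfini() {
    sync
    modprobe -v -r nvme-rdma || true           # nvmfcleanup in the trace retries this removal
    [ -n "${nvmfpid:-}" ] && kill "$nvmfpid" 2>/dev/null
}

nvmftestinit() {
    [ -z "${TEST_TRANSPORT:-}" ] && return 1   # the "'[' -z rdma ']'" check seen above
    trap nvmftestfini SIGINT SIGTERM EXIT      # teardown runs on exit or on a signal
}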
00:17:03.997 21:20:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:03.997 21:20:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:03.997 21:20:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:03.997 21:20:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:03.997 21:20:37 -- nvmf/common.sh@294 -- # net_devs=() 00:17:03.997 21:20:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:03.997 21:20:37 -- nvmf/common.sh@295 -- # e810=() 00:17:03.997 21:20:37 -- nvmf/common.sh@295 -- # local -ga e810 00:17:03.997 21:20:37 -- nvmf/common.sh@296 -- # x722=() 00:17:03.997 21:20:37 -- nvmf/common.sh@296 -- # local -ga x722 00:17:03.997 21:20:37 -- nvmf/common.sh@297 -- # mlx=() 00:17:03.997 21:20:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:03.997 21:20:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.997 21:20:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:03.997 21:20:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:03.997 21:20:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:03.997 21:20:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:03.997 21:20:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:03.997 21:20:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:03.997 21:20:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:03.997 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:03.997 21:20:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:03.997 21:20:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:03.997 21:20:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:03.997 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:03.997 21:20:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
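The scan above is nvmf/common.sh sorting PCI NICs into e810/x722/mlx buckets by vendor:device ID and, since this rig is Mellanox-only, keeping just the 0x15b3 parts; that is what produces the two "Found 0000:d9:00.x (0x15b3 - 0x1015)" lines. A hypothetical standalone equivalent (not the script's own helper, and assuming lspci is available) that prints the same kind of line for class-02xx network devices:

mellanox=15b3
lspci -Dn | awk '$2 ~ /^02/ {print $1, $3}' | while read -r addr vd; do
    # vd is vendor:device, e.g. 15b3:1015 for the ConnectX-4 Lx ports in this log
    [ "${vd%%:*}" = "$mellanox" ] && echo "Found $addr (0x${vd%%:*} - 0x${vd##*:})"
done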
00:17:03.997 21:20:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:03.997 21:20:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:03.997 21:20:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:03.997 21:20:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:03.997 21:20:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.997 21:20:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:03.997 21:20:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.997 21:20:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:03.997 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:03.997 21:20:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.997 21:20:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:03.997 21:20:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.997 21:20:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:03.997 21:20:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.997 21:20:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:03.997 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:03.998 21:20:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.998 21:20:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:03.998 21:20:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:03.998 21:20:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:03.998 21:20:37 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:03.998 21:20:37 -- nvmf/common.sh@57 -- # uname 00:17:03.998 21:20:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:03.998 21:20:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:03.998 21:20:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:03.998 21:20:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:03.998 21:20:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:03.998 21:20:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:03.998 21:20:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:03.998 21:20:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:03.998 21:20:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:03.998 21:20:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:03.998 21:20:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:03.998 21:20:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:03.998 21:20:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:03.998 21:20:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:03.998 21:20:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:03.998 21:20:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:03.998 21:20:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:03.998 21:20:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.998 21:20:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:03.998 21:20:37 -- nvmf/common.sh@104 -- # continue 2 00:17:03.998 21:20:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:03.998 21:20:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
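Two steps are worth pulling out of the trace above: rdma_device_init loads a fixed set of IB/RDMA kernel modules before any addresses are assigned, and each kept PCI device is mapped to its netdev name through sysfs, which is where the mlx_0_0/mlx_0_1 names come from. Both snippets below are lifted from the traced commands; the example PCI address is the first port in this log, and running them needs root plus the mlx5 driver already bound:

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done

pci=0000:d9:00.0                                    # first mlx5 port in this run
pci_net_devs=( /sys/bus/pci/devices/"$pci"/net/* )
echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"   # -> mlx_0_0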
00:17:03.998 21:20:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.998 21:20:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:03.998 21:20:37 -- nvmf/common.sh@104 -- # continue 2 00:17:03.998 21:20:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:03.998 21:20:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:03.998 21:20:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:03.998 21:20:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:03.998 21:20:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:03.998 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:03.998 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:03.998 altname enp217s0f0np0 00:17:03.998 altname ens818f0np0 00:17:03.998 inet 192.168.100.8/24 scope global mlx_0_0 00:17:03.998 valid_lft forever preferred_lft forever 00:17:03.998 21:20:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:03.998 21:20:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:03.998 21:20:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:03.998 21:20:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:03.998 21:20:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:03.998 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:03.998 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:03.998 altname enp217s0f1np1 00:17:03.998 altname ens818f1np1 00:17:03.998 inet 192.168.100.9/24 scope global mlx_0_1 00:17:03.998 valid_lft forever preferred_lft forever 00:17:03.998 21:20:37 -- nvmf/common.sh@410 -- # return 0 00:17:03.998 21:20:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:03.998 21:20:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:03.998 21:20:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:03.998 21:20:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:03.998 21:20:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:03.998 21:20:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:03.998 21:20:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:03.998 21:20:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:03.998 21:20:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:03.998 21:20:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:03.998 21:20:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.998 21:20:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:03.998 21:20:37 -- nvmf/common.sh@104 -- # continue 2 00:17:03.998 21:20:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
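get_ip_address, as traced above, is nothing more than an ip/awk/cut pipeline; on this machine it resolves mlx_0_0 to 192.168.100.8 and mlx_0_1 to 192.168.100.9, which become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP just below. Reproduced from the trace:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig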
00:17:03.998 21:20:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.998 21:20:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.998 21:20:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:03.998 21:20:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:03.998 21:20:37 -- nvmf/common.sh@104 -- # continue 2 00:17:03.998 21:20:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:03.998 21:20:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:03.998 21:20:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:03.998 21:20:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:03.998 21:20:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:03.998 21:20:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:03.998 21:20:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:03.998 21:20:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:03.998 21:20:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:03.998 21:20:38 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:03.998 192.168.100.9' 00:17:03.998 21:20:38 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:03.998 192.168.100.9' 00:17:03.998 21:20:38 -- nvmf/common.sh@445 -- # head -n 1 00:17:03.998 21:20:38 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:03.998 21:20:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:03.998 192.168.100.9' 00:17:03.998 21:20:38 -- nvmf/common.sh@446 -- # tail -n +2 00:17:03.998 21:20:38 -- nvmf/common.sh@446 -- # head -n 1 00:17:03.998 21:20:38 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:03.998 21:20:38 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:03.998 21:20:38 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:03.998 21:20:38 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:03.998 21:20:38 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:03.998 21:20:38 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:03.998 21:20:38 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:03.998 21:20:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:03.998 21:20:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:03.998 21:20:38 -- common/autotest_common.sh@10 -- # set +x 00:17:03.998 21:20:38 -- nvmf/common.sh@469 -- # nvmfpid=1650322 00:17:03.998 21:20:38 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:03.998 21:20:38 -- nvmf/common.sh@470 -- # waitforlisten 1650322 00:17:03.998 21:20:38 -- common/autotest_common.sh@819 -- # '[' -z 1650322 ']' 00:17:03.998 21:20:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.998 21:20:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:03.998 21:20:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
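Here the test finally starts the target itself: nvmf_tgt is launched with core mask 0xE and every trace group enabled, its PID is recorded as nvmfpid, and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that start-and-wait step, assuming the workspace path from this log (SPDK_DIR is just shorthand) and using rpc_get_methods purely as a liveness probe, which is not how the real waitforlisten helper polls:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # keep polling until the UNIX-domain RPC socket is up
done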
00:17:03.998 21:20:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:03.998 21:20:38 -- common/autotest_common.sh@10 -- # set +x 00:17:03.998 [2024-07-26 21:20:38.115135] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:03.998 [2024-07-26 21:20:38.115193] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.998 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.998 [2024-07-26 21:20:38.202908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:03.998 [2024-07-26 21:20:38.239775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:03.998 [2024-07-26 21:20:38.239891] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.998 [2024-07-26 21:20:38.239901] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.998 [2024-07-26 21:20:38.239909] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.998 [2024-07-26 21:20:38.239951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.998 [2024-07-26 21:20:38.240053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.998 [2024-07-26 21:20:38.240055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.258 21:20:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.258 21:20:38 -- common/autotest_common.sh@852 -- # return 0 00:17:04.258 21:20:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:04.258 21:20:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:04.258 21:20:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.258 21:20:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.258 21:20:38 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:04.258 21:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.258 21:20:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.258 [2024-07-26 21:20:38.994028] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbc6860/0xbcad50) succeed. 00:17:04.258 [2024-07-26 21:20:39.004715] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbc7db0/0xc0c3e0) succeed. 
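With the reactors running on cores 1-3, the first RPC configures the RDMA transport; the flag values below (1024 shared buffers, 8192-byte I/O unit size) are copied from the rpc_cmd line in the trace, only issued through rpc.py directly instead of the test's rpc_cmd wrapper:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma \
    --num-shared-buffers 1024 -u 8192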
00:17:04.258 21:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:04.258 21:20:39 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:04.258 21:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.258 21:20:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.258 21:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:04.258 21:20:39 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:04.258 21:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.258 21:20:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.258 [2024-07-26 21:20:39.115919] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:04.258 21:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:04.258 21:20:39 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:04.258 21:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.258 21:20:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.258 NULL1 00:17:04.517 21:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:04.517 21:20:39 -- target/connect_stress.sh@21 -- # PERF_PID=1650609 00:17:04.517 21:20:39 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:04.517 21:20:39 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:04.517 21:20:39 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # seq 1 20 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 
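The target-side setup traced above boils down to three RPCs: a subsystem capped at 10 namespaces, an RDMA listener on 192.168.100.8:4420, and a 1000 MB null bdev, after which the connect_stress binary is launched against that subsystem; the seq 1 20 / cat loop that continues below then assembles the rpc.txt batch used later in the test. Every value in the sketch is copied from the log; only the SPDK_DIR/rpc shorthands are mine:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512
"$SPDK_DIR/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!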
00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.517 21:20:39 -- target/connect_stress.sh@28 -- # cat 00:17:04.517 21:20:39 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:04.517 21:20:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.517 21:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.517 21:20:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.777 21:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:04.777 21:20:39 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:04.777 21:20:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.777 21:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.777 21:20:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.036 21:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.036 21:20:39 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:05.036 21:20:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.036 21:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.036 21:20:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.604 21:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.604 21:20:40 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:05.604 21:20:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.604 21:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.604 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:05.864 21:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.864 21:20:40 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:05.864 21:20:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.864 21:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.864 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.122 21:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.122 21:20:40 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:06.122 21:20:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.122 21:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.122 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.381 21:20:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.382 21:20:41 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:06.382 21:20:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.382 21:20:41 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.382 21:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:06.641 21:20:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.641 21:20:41 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:06.641 21:20:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.641 21:20:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.641 21:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.209 21:20:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:07.209 21:20:41 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:07.209 21:20:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.209 21:20:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:07.209 21:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.500 21:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:07.500 21:20:42 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:07.500 21:20:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.500 21:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:07.500 21:20:42 -- common/autotest_common.sh@10 -- # set +x 00:17:07.766 21:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:07.766 21:20:42 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:07.766 21:20:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.766 21:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:07.766 21:20:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.025 21:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:08.025 21:20:42 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:08.025 21:20:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.025 21:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:08.025 21:20:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.284 21:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:08.284 21:20:43 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:08.284 21:20:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.284 21:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:08.284 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:17:08.851 21:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:08.851 21:20:43 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:08.851 21:20:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.851 21:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:08.851 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.109 21:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.109 21:20:43 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:09.109 21:20:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.109 21:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.109 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.367 21:20:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.367 21:20:44 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:09.367 21:20:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.367 21:20:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.367 21:20:44 -- common/autotest_common.sh@10 -- # set +x 00:17:09.626 21:20:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.627 21:20:44 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:09.627 21:20:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.627 21:20:44 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.627 21:20:44 -- common/autotest_common.sh@10 -- # set +x 00:17:09.886 21:20:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.886 21:20:44 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:09.886 21:20:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.886 21:20:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.886 21:20:44 -- common/autotest_common.sh@10 -- # set +x 00:17:10.455 21:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.455 21:20:45 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:10.455 21:20:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.455 21:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.455 21:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:10.714 21:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.714 21:20:45 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:10.714 21:20:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.714 21:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.714 21:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:10.974 21:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.974 21:20:45 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:10.974 21:20:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.974 21:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.974 21:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.233 21:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.233 21:20:46 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:11.233 21:20:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.233 21:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.233 21:20:46 -- common/autotest_common.sh@10 -- # set +x 00:17:11.801 21:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.801 21:20:46 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:11.801 21:20:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.801 21:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.801 21:20:46 -- common/autotest_common.sh@10 -- # set +x 00:17:12.060 21:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:12.060 21:20:46 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:12.060 21:20:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.060 21:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:12.060 21:20:46 -- common/autotest_common.sh@10 -- # set +x 00:17:12.319 21:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:12.319 21:20:47 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:12.319 21:20:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.319 21:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:12.319 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:17:12.578 21:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:12.578 21:20:47 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:12.578 21:20:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.578 21:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:12.578 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:17:12.837 21:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:12.837 21:20:47 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:12.837 21:20:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.837 21:20:47 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:17:12.837 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:17:13.405 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:13.405 21:20:48 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:13.405 21:20:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.405 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:13.405 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:17:13.665 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:13.665 21:20:48 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:13.665 21:20:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.665 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:13.665 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:17:13.924 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:13.924 21:20:48 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:13.924 21:20:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.924 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:13.924 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.183 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:14.183 21:20:48 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:14.183 21:20:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.183 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:14.183 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.441 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:14.700 21:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:14.700 21:20:49 -- target/connect_stress.sh@34 -- # kill -0 1650609 00:17:14.700 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1650609) - No such process 00:17:14.700 21:20:49 -- target/connect_stress.sh@38 -- # wait 1650609 00:17:14.700 21:20:49 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.700 21:20:49 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:14.700 21:20:49 -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:14.700 21:20:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:14.700 21:20:49 -- nvmf/common.sh@116 -- # sync 00:17:14.700 21:20:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:14.700 21:20:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:14.700 21:20:49 -- nvmf/common.sh@119 -- # set +e 00:17:14.700 21:20:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:14.700 21:20:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:14.700 rmmod nvme_rdma 00:17:14.700 rmmod nvme_fabrics 00:17:14.700 21:20:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:14.700 21:20:49 -- nvmf/common.sh@123 -- # set -e 00:17:14.700 21:20:49 -- nvmf/common.sh@124 -- # return 0 00:17:14.700 21:20:49 -- nvmf/common.sh@477 -- # '[' -n 1650322 ']' 00:17:14.700 21:20:49 -- nvmf/common.sh@478 -- # killprocess 1650322 00:17:14.700 21:20:49 -- common/autotest_common.sh@926 -- # '[' -z 1650322 ']' 00:17:14.700 21:20:49 -- common/autotest_common.sh@930 -- # kill -0 1650322 00:17:14.700 21:20:49 -- common/autotest_common.sh@931 -- # uname 00:17:14.700 21:20:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:14.700 21:20:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1650322 00:17:14.700 21:20:49 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:14.700 21:20:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:14.700 21:20:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1650322' 00:17:14.700 killing process with pid 1650322 00:17:14.700 21:20:49 -- common/autotest_common.sh@945 -- # kill 1650322 00:17:14.700 21:20:49 -- common/autotest_common.sh@950 -- # wait 1650322 00:17:14.959 21:20:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:14.959 21:20:49 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:14.959 00:17:14.959 real 0m19.260s 00:17:14.959 user 0m41.533s 00:17:14.959 sys 0m8.471s 00:17:14.959 21:20:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.959 21:20:49 -- common/autotest_common.sh@10 -- # set +x 00:17:14.959 ************************************ 00:17:14.959 END TEST nvmf_connect_stress 00:17:14.959 ************************************ 00:17:14.959 21:20:49 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:14.959 21:20:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:14.959 21:20:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:14.959 21:20:49 -- common/autotest_common.sh@10 -- # set +x 00:17:14.959 ************************************ 00:17:14.959 START TEST nvmf_fused_ordering 00:17:14.959 ************************************ 00:17:14.959 21:20:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:14.959 * Looking for test storage... 00:17:14.959 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:14.959 21:20:49 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.959 21:20:49 -- nvmf/common.sh@7 -- # uname -s 00:17:14.959 21:20:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.959 21:20:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.959 21:20:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.959 21:20:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.959 21:20:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.959 21:20:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.959 21:20:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.959 21:20:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.959 21:20:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.959 21:20:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.218 21:20:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:15.218 21:20:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:15.218 21:20:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.218 21:20:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.218 21:20:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.218 21:20:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:15.218 21:20:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.218 21:20:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.218 21:20:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
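For orientation, this is where the harness finishes nvmf_connect_stress and dispatches the next target test. In outline (paths and the --transport argument are taken from the run_test entry above; treat this as a sketch of the dispatch, not the exact script):

    run_test nvmf_fused_ordering \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma

Inside fused_ordering.sh, nvmftestinit (sourced from test/nvmf/common.sh) prepares the net devices: the entries that follow show it probing the two mlx5 ports and assigning 192.168.100.8 and 192.168.100.9 before the target is started.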
00:17:15.218 21:20:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.218 21:20:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.218 21:20:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.218 21:20:49 -- paths/export.sh@5 -- # export PATH 00:17:15.218 21:20:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.218 21:20:49 -- nvmf/common.sh@46 -- # : 0 00:17:15.218 21:20:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:15.218 21:20:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:15.218 21:20:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:15.218 21:20:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.218 21:20:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.218 21:20:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:15.218 21:20:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:15.218 21:20:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:15.218 21:20:49 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:15.218 21:20:49 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:15.218 21:20:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.218 21:20:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:15.218 21:20:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:15.218 21:20:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:15.218 21:20:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.218 21:20:49 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:15.218 21:20:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.218 21:20:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:15.218 21:20:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:15.218 21:20:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:15.218 21:20:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.336 21:20:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:23.336 21:20:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:23.336 21:20:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:23.336 21:20:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:23.336 21:20:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:23.336 21:20:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:23.336 21:20:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:23.336 21:20:57 -- nvmf/common.sh@294 -- # net_devs=() 00:17:23.336 21:20:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:23.336 21:20:57 -- nvmf/common.sh@295 -- # e810=() 00:17:23.336 21:20:57 -- nvmf/common.sh@295 -- # local -ga e810 00:17:23.336 21:20:57 -- nvmf/common.sh@296 -- # x722=() 00:17:23.336 21:20:57 -- nvmf/common.sh@296 -- # local -ga x722 00:17:23.336 21:20:57 -- nvmf/common.sh@297 -- # mlx=() 00:17:23.336 21:20:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:23.336 21:20:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.336 21:20:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.336 21:20:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.336 21:20:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.336 21:20:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.336 21:20:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.336 21:20:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.336 21:20:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.336 21:20:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.336 21:20:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.337 21:20:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.337 21:20:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:23.337 21:20:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:23.337 21:20:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:23.337 21:20:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:23.337 21:20:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:23.337 21:20:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:23.337 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:23.337 21:20:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 
00:17:23.337 21:20:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:23.337 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:23.337 21:20:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:23.337 21:20:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:23.337 21:20:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.337 21:20:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:23.337 21:20:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.337 21:20:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:23.337 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:23.337 21:20:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.337 21:20:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.337 21:20:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:23.337 21:20:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.337 21:20:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:23.337 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:23.337 21:20:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.337 21:20:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:23.337 21:20:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:23.337 21:20:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:23.337 21:20:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:23.337 21:20:57 -- nvmf/common.sh@57 -- # uname 00:17:23.337 21:20:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:23.337 21:20:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:23.337 21:20:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:23.337 21:20:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:23.337 21:20:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:23.337 21:20:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:23.337 21:20:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:23.337 21:20:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:23.337 21:20:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:23.337 21:20:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:23.337 21:20:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:23.337 21:20:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:23.337 21:20:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:23.337 21:20:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:23.337 21:20:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
00:17:23.337 21:20:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:23.337 21:20:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:23.337 21:20:57 -- nvmf/common.sh@104 -- # continue 2 00:17:23.337 21:20:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:23.337 21:20:57 -- nvmf/common.sh@104 -- # continue 2 00:17:23.337 21:20:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:23.337 21:20:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:23.337 21:20:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:23.337 21:20:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:23.337 21:20:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:23.337 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:23.337 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:23.337 altname enp217s0f0np0 00:17:23.337 altname ens818f0np0 00:17:23.337 inet 192.168.100.8/24 scope global mlx_0_0 00:17:23.337 valid_lft forever preferred_lft forever 00:17:23.337 21:20:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:23.337 21:20:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:23.337 21:20:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:23.337 21:20:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:23.337 21:20:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:23.337 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:23.337 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:23.337 altname enp217s0f1np1 00:17:23.337 altname ens818f1np1 00:17:23.337 inet 192.168.100.9/24 scope global mlx_0_1 00:17:23.337 valid_lft forever preferred_lft forever 00:17:23.337 21:20:57 -- nvmf/common.sh@410 -- # return 0 00:17:23.337 21:20:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:23.337 21:20:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:23.337 21:20:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:23.337 21:20:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:23.337 21:20:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:23.337 21:20:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:23.337 21:20:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:23.337 21:20:57 -- nvmf/common.sh@53 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:23.337 21:20:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:23.337 21:20:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:23.337 21:20:57 -- nvmf/common.sh@104 -- # continue 2 00:17:23.337 21:20:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.337 21:20:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:23.337 21:20:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:23.337 21:20:57 -- nvmf/common.sh@104 -- # continue 2 00:17:23.337 21:20:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:23.337 21:20:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:23.337 21:20:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:23.337 21:20:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:23.337 21:20:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:23.337 21:20:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:23.337 21:20:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:23.337 21:20:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:23.337 192.168.100.9' 00:17:23.337 21:20:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:23.337 192.168.100.9' 00:17:23.337 21:20:57 -- nvmf/common.sh@445 -- # head -n 1 00:17:23.337 21:20:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:23.337 21:20:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:23.337 192.168.100.9' 00:17:23.337 21:20:57 -- nvmf/common.sh@446 -- # tail -n +2 00:17:23.337 21:20:57 -- nvmf/common.sh@446 -- # head -n 1 00:17:23.337 21:20:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:23.337 21:20:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:23.337 21:20:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:23.337 21:20:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:23.338 21:20:57 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:23.338 21:20:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:23.338 21:20:57 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:23.338 21:20:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:23.338 21:20:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:23.338 21:20:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.338 21:20:57 -- nvmf/common.sh@469 -- # nvmfpid=1656443 00:17:23.338 21:20:57 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:23.338 21:20:57 -- nvmf/common.sh@470 -- # waitforlisten 1656443 00:17:23.338 21:20:57 -- 
common/autotest_common.sh@819 -- # '[' -z 1656443 ']' 00:17:23.338 21:20:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.338 21:20:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:23.338 21:20:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.338 21:20:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:23.338 21:20:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.338 [2024-07-26 21:20:58.000043] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:23.338 [2024-07-26 21:20:58.000098] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.338 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.338 [2024-07-26 21:20:58.083362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.338 [2024-07-26 21:20:58.118668] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:23.338 [2024-07-26 21:20:58.118781] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.338 [2024-07-26 21:20:58.118791] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.338 [2024-07-26 21:20:58.118799] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.338 [2024-07-26 21:20:58.118819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.275 21:20:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:24.275 21:20:58 -- common/autotest_common.sh@852 -- # return 0 00:17:24.275 21:20:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:24.275 21:20:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:24.275 21:20:58 -- common/autotest_common.sh@10 -- # set +x 00:17:24.275 21:20:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.275 21:20:58 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:24.275 21:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.275 21:20:58 -- common/autotest_common.sh@10 -- # set +x 00:17:24.275 [2024-07-26 21:20:58.853960] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13fa250/0x13fe740) succeed. 00:17:24.275 [2024-07-26 21:20:58.862791] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13fb750/0x143fdd0) succeed. 
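To summarize the target bring-up traced above: the test launches a standalone nvmf_tgt on its own core mask, waits for its RPC socket, and then creates the RDMA transport. In outline (commands as they appear in the trace; rpc_cmd is the harness helper that forwards to the target's JSON-RPC socket):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # recorded by the harness as nvmfpid=1656443 in this run
    waitforlisten 1656443          # blocks until /var/tmp/spdk.sock is listening
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two "create_ib_device mlx5_*" notices confirm that the RDMA transport attached to both Mellanox ports.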
00:17:24.275 21:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.275 21:20:58 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:24.275 21:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.275 21:20:58 -- common/autotest_common.sh@10 -- # set +x 00:17:24.275 21:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.275 21:20:58 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:24.275 21:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.275 21:20:58 -- common/autotest_common.sh@10 -- # set +x 00:17:24.275 [2024-07-26 21:20:58.920002] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:24.275 21:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.275 21:20:58 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:24.275 21:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.275 21:20:58 -- common/autotest_common.sh@10 -- # set +x 00:17:24.275 NULL1 00:17:24.275 21:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.275 21:20:58 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:24.275 21:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.275 21:20:58 -- common/autotest_common.sh@10 -- # set +x 00:17:24.275 21:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.275 21:20:58 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:24.275 21:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.275 21:20:58 -- common/autotest_common.sh@10 -- # set +x 00:17:24.275 21:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.276 21:20:58 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:24.276 [2024-07-26 21:20:58.975139] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:17:24.276 [2024-07-26 21:20:58.975208] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656550 ] 00:17:24.276 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.535 Attached to nqn.2016-06.io.spdk:cnode1 00:17:24.536 Namespace ID: 1 size: 1GB 00:17:24.536 fused_ordering(0) 00:17:24.536 fused_ordering(1) 00:17:24.536 fused_ordering(2) 00:17:24.536 fused_ordering(3) 00:17:24.536 fused_ordering(4) 00:17:24.536 fused_ordering(5) 00:17:24.536 fused_ordering(6) 00:17:24.536 fused_ordering(7) 00:17:24.536 fused_ordering(8) 00:17:24.536 fused_ordering(9) 00:17:24.536 fused_ordering(10) 00:17:24.536 fused_ordering(11) 00:17:24.536 fused_ordering(12) 00:17:24.536 fused_ordering(13) 00:17:24.536 fused_ordering(14) 00:17:24.536 fused_ordering(15) 00:17:24.536 fused_ordering(16) 00:17:24.536 fused_ordering(17) 00:17:24.536 fused_ordering(18) 00:17:24.536 fused_ordering(19) 00:17:24.536 fused_ordering(20) 00:17:24.536 fused_ordering(21) 00:17:24.536 fused_ordering(22) 00:17:24.536 fused_ordering(23) 00:17:24.536 fused_ordering(24) 00:17:24.536 fused_ordering(25) 00:17:24.536 fused_ordering(26) 00:17:24.536 fused_ordering(27) 00:17:24.536 fused_ordering(28) 00:17:24.536 fused_ordering(29) 00:17:24.536 fused_ordering(30) 00:17:24.536 fused_ordering(31) 00:17:24.536 fused_ordering(32) 00:17:24.536 fused_ordering(33) 00:17:24.536 fused_ordering(34) 00:17:24.536 fused_ordering(35) 00:17:24.536 fused_ordering(36) 00:17:24.536 fused_ordering(37) 00:17:24.536 fused_ordering(38) 00:17:24.536 fused_ordering(39) 00:17:24.536 fused_ordering(40) 00:17:24.536 fused_ordering(41) 00:17:24.536 fused_ordering(42) 00:17:24.536 fused_ordering(43) 00:17:24.536 fused_ordering(44) 00:17:24.536 fused_ordering(45) 00:17:24.536 fused_ordering(46) 00:17:24.536 fused_ordering(47) 00:17:24.536 fused_ordering(48) 00:17:24.536 fused_ordering(49) 00:17:24.536 fused_ordering(50) 00:17:24.536 fused_ordering(51) 00:17:24.536 fused_ordering(52) 00:17:24.536 fused_ordering(53) 00:17:24.536 fused_ordering(54) 00:17:24.536 fused_ordering(55) 00:17:24.536 fused_ordering(56) 00:17:24.536 fused_ordering(57) 00:17:24.536 fused_ordering(58) 00:17:24.536 fused_ordering(59) 00:17:24.536 fused_ordering(60) 00:17:24.536 fused_ordering(61) 00:17:24.536 fused_ordering(62) 00:17:24.536 fused_ordering(63) 00:17:24.536 fused_ordering(64) 00:17:24.536 fused_ordering(65) 00:17:24.536 fused_ordering(66) 00:17:24.536 fused_ordering(67) 00:17:24.536 fused_ordering(68) 00:17:24.536 fused_ordering(69) 00:17:24.536 fused_ordering(70) 00:17:24.536 fused_ordering(71) 00:17:24.536 fused_ordering(72) 00:17:24.536 fused_ordering(73) 00:17:24.536 fused_ordering(74) 00:17:24.536 fused_ordering(75) 00:17:24.536 fused_ordering(76) 00:17:24.536 fused_ordering(77) 00:17:24.536 fused_ordering(78) 00:17:24.536 fused_ordering(79) 00:17:24.536 fused_ordering(80) 00:17:24.536 fused_ordering(81) 00:17:24.536 fused_ordering(82) 00:17:24.536 fused_ordering(83) 00:17:24.536 fused_ordering(84) 00:17:24.536 fused_ordering(85) 00:17:24.536 fused_ordering(86) 00:17:24.536 fused_ordering(87) 00:17:24.536 fused_ordering(88) 00:17:24.536 fused_ordering(89) 00:17:24.536 fused_ordering(90) 00:17:24.536 fused_ordering(91) 00:17:24.536 fused_ordering(92) 00:17:24.536 fused_ordering(93) 00:17:24.536 fused_ordering(94) 00:17:24.536 fused_ordering(95) 00:17:24.536 fused_ordering(96) 00:17:24.536 
fused_ordering(97) 00:17:24.536 fused_ordering(98) 00:17:24.536 fused_ordering(99) 00:17:24.536 fused_ordering(100) 00:17:24.536 fused_ordering(101) 00:17:24.536 fused_ordering(102) 00:17:24.536 fused_ordering(103) 00:17:24.536 fused_ordering(104) 00:17:24.536 fused_ordering(105) 00:17:24.536 fused_ordering(106) 00:17:24.536 fused_ordering(107) 00:17:24.536 fused_ordering(108) 00:17:24.536 fused_ordering(109) 00:17:24.536 fused_ordering(110) 00:17:24.536 fused_ordering(111) 00:17:24.536 fused_ordering(112) 00:17:24.536 fused_ordering(113) 00:17:24.536 fused_ordering(114) 00:17:24.536 fused_ordering(115) 00:17:24.536 fused_ordering(116) 00:17:24.536 fused_ordering(117) 00:17:24.536 fused_ordering(118) 00:17:24.536 fused_ordering(119) 00:17:24.536 fused_ordering(120) 00:17:24.536 fused_ordering(121) 00:17:24.536 fused_ordering(122) 00:17:24.536 fused_ordering(123) 00:17:24.536 fused_ordering(124) 00:17:24.536 fused_ordering(125) 00:17:24.536 fused_ordering(126) 00:17:24.536 fused_ordering(127) 00:17:24.536 fused_ordering(128) 00:17:24.536 fused_ordering(129) 00:17:24.536 fused_ordering(130) 00:17:24.536 fused_ordering(131) 00:17:24.536 fused_ordering(132) 00:17:24.536 fused_ordering(133) 00:17:24.536 fused_ordering(134) 00:17:24.536 fused_ordering(135) 00:17:24.536 fused_ordering(136) 00:17:24.536 fused_ordering(137) 00:17:24.536 fused_ordering(138) 00:17:24.536 fused_ordering(139) 00:17:24.536 fused_ordering(140) 00:17:24.536 fused_ordering(141) 00:17:24.536 fused_ordering(142) 00:17:24.536 fused_ordering(143) 00:17:24.536 fused_ordering(144) 00:17:24.536 fused_ordering(145) 00:17:24.536 fused_ordering(146) 00:17:24.536 fused_ordering(147) 00:17:24.536 fused_ordering(148) 00:17:24.536 fused_ordering(149) 00:17:24.536 fused_ordering(150) 00:17:24.536 fused_ordering(151) 00:17:24.536 fused_ordering(152) 00:17:24.536 fused_ordering(153) 00:17:24.536 fused_ordering(154) 00:17:24.536 fused_ordering(155) 00:17:24.536 fused_ordering(156) 00:17:24.536 fused_ordering(157) 00:17:24.536 fused_ordering(158) 00:17:24.536 fused_ordering(159) 00:17:24.536 fused_ordering(160) 00:17:24.536 fused_ordering(161) 00:17:24.536 fused_ordering(162) 00:17:24.536 fused_ordering(163) 00:17:24.536 fused_ordering(164) 00:17:24.536 fused_ordering(165) 00:17:24.536 fused_ordering(166) 00:17:24.536 fused_ordering(167) 00:17:24.536 fused_ordering(168) 00:17:24.536 fused_ordering(169) 00:17:24.536 fused_ordering(170) 00:17:24.536 fused_ordering(171) 00:17:24.536 fused_ordering(172) 00:17:24.536 fused_ordering(173) 00:17:24.536 fused_ordering(174) 00:17:24.536 fused_ordering(175) 00:17:24.536 fused_ordering(176) 00:17:24.536 fused_ordering(177) 00:17:24.536 fused_ordering(178) 00:17:24.536 fused_ordering(179) 00:17:24.536 fused_ordering(180) 00:17:24.536 fused_ordering(181) 00:17:24.536 fused_ordering(182) 00:17:24.536 fused_ordering(183) 00:17:24.536 fused_ordering(184) 00:17:24.536 fused_ordering(185) 00:17:24.536 fused_ordering(186) 00:17:24.536 fused_ordering(187) 00:17:24.536 fused_ordering(188) 00:17:24.536 fused_ordering(189) 00:17:24.536 fused_ordering(190) 00:17:24.536 fused_ordering(191) 00:17:24.536 fused_ordering(192) 00:17:24.536 fused_ordering(193) 00:17:24.536 fused_ordering(194) 00:17:24.536 fused_ordering(195) 00:17:24.536 fused_ordering(196) 00:17:24.536 fused_ordering(197) 00:17:24.536 fused_ordering(198) 00:17:24.536 fused_ordering(199) 00:17:24.536 fused_ordering(200) 00:17:24.536 fused_ordering(201) 00:17:24.536 fused_ordering(202) 00:17:24.536 fused_ordering(203) 00:17:24.536 fused_ordering(204) 
00:17:24.536 fused_ordering(205) 00:17:24.536 fused_ordering(206) 00:17:24.536 fused_ordering(207) 00:17:24.536 fused_ordering(208) 00:17:24.536 fused_ordering(209) 00:17:24.536 fused_ordering(210) 00:17:24.536 fused_ordering(211) 00:17:24.536 fused_ordering(212) 00:17:24.536 fused_ordering(213) 00:17:24.536 fused_ordering(214) 00:17:24.536 fused_ordering(215) 00:17:24.536 fused_ordering(216) 00:17:24.536 fused_ordering(217) 00:17:24.536 fused_ordering(218) 00:17:24.536 fused_ordering(219) 00:17:24.536 fused_ordering(220) 00:17:24.536 fused_ordering(221) 00:17:24.536 fused_ordering(222) 00:17:24.536 fused_ordering(223) 00:17:24.536 fused_ordering(224) 00:17:24.536 fused_ordering(225) 00:17:24.536 fused_ordering(226) 00:17:24.536 fused_ordering(227) 00:17:24.536 fused_ordering(228) 00:17:24.536 fused_ordering(229) 00:17:24.536 fused_ordering(230) 00:17:24.536 fused_ordering(231) 00:17:24.536 fused_ordering(232) 00:17:24.536 fused_ordering(233) 00:17:24.536 fused_ordering(234) 00:17:24.536 fused_ordering(235) 00:17:24.536 fused_ordering(236) 00:17:24.536 fused_ordering(237) 00:17:24.536 fused_ordering(238) 00:17:24.536 fused_ordering(239) 00:17:24.536 fused_ordering(240) 00:17:24.536 fused_ordering(241) 00:17:24.536 fused_ordering(242) 00:17:24.536 fused_ordering(243) 00:17:24.536 fused_ordering(244) 00:17:24.536 fused_ordering(245) 00:17:24.536 fused_ordering(246) 00:17:24.536 fused_ordering(247) 00:17:24.536 fused_ordering(248) 00:17:24.536 fused_ordering(249) 00:17:24.536 fused_ordering(250) 00:17:24.536 fused_ordering(251) 00:17:24.536 fused_ordering(252) 00:17:24.536 fused_ordering(253) 00:17:24.536 fused_ordering(254) 00:17:24.536 fused_ordering(255) 00:17:24.536 fused_ordering(256) 00:17:24.536 fused_ordering(257) 00:17:24.536 fused_ordering(258) 00:17:24.536 fused_ordering(259) 00:17:24.536 fused_ordering(260) 00:17:24.536 fused_ordering(261) 00:17:24.536 fused_ordering(262) 00:17:24.536 fused_ordering(263) 00:17:24.536 fused_ordering(264) 00:17:24.536 fused_ordering(265) 00:17:24.537 fused_ordering(266) 00:17:24.537 fused_ordering(267) 00:17:24.537 fused_ordering(268) 00:17:24.537 fused_ordering(269) 00:17:24.537 fused_ordering(270) 00:17:24.537 fused_ordering(271) 00:17:24.537 fused_ordering(272) 00:17:24.537 fused_ordering(273) 00:17:24.537 fused_ordering(274) 00:17:24.537 fused_ordering(275) 00:17:24.537 fused_ordering(276) 00:17:24.537 fused_ordering(277) 00:17:24.537 fused_ordering(278) 00:17:24.537 fused_ordering(279) 00:17:24.537 fused_ordering(280) 00:17:24.537 fused_ordering(281) 00:17:24.537 fused_ordering(282) 00:17:24.537 fused_ordering(283) 00:17:24.537 fused_ordering(284) 00:17:24.537 fused_ordering(285) 00:17:24.537 fused_ordering(286) 00:17:24.537 fused_ordering(287) 00:17:24.537 fused_ordering(288) 00:17:24.537 fused_ordering(289) 00:17:24.537 fused_ordering(290) 00:17:24.537 fused_ordering(291) 00:17:24.537 fused_ordering(292) 00:17:24.537 fused_ordering(293) 00:17:24.537 fused_ordering(294) 00:17:24.537 fused_ordering(295) 00:17:24.537 fused_ordering(296) 00:17:24.537 fused_ordering(297) 00:17:24.537 fused_ordering(298) 00:17:24.537 fused_ordering(299) 00:17:24.537 fused_ordering(300) 00:17:24.537 fused_ordering(301) 00:17:24.537 fused_ordering(302) 00:17:24.537 fused_ordering(303) 00:17:24.537 fused_ordering(304) 00:17:24.537 fused_ordering(305) 00:17:24.537 fused_ordering(306) 00:17:24.537 fused_ordering(307) 00:17:24.537 fused_ordering(308) 00:17:24.537 fused_ordering(309) 00:17:24.537 fused_ordering(310) 00:17:24.537 fused_ordering(311) 00:17:24.537 
fused_ordering(312) 00:17:24.537 (fused_ordering iterations 313 through 956 repeat in the same pattern, one entry per counter, all logged between 00:17:24.537 and 00:17:24.799) 00:17:24.800 
fused_ordering(957) 00:17:24.800 fused_ordering(958) 00:17:24.800 fused_ordering(959) 00:17:24.800 fused_ordering(960) 00:17:24.800 fused_ordering(961) 00:17:24.800 fused_ordering(962) 00:17:24.800 fused_ordering(963) 00:17:24.800 fused_ordering(964) 00:17:24.800 fused_ordering(965) 00:17:24.800 fused_ordering(966) 00:17:24.800 fused_ordering(967) 00:17:24.800 fused_ordering(968) 00:17:24.800 fused_ordering(969) 00:17:24.800 fused_ordering(970) 00:17:24.800 fused_ordering(971) 00:17:24.800 fused_ordering(972) 00:17:24.800 fused_ordering(973) 00:17:24.800 fused_ordering(974) 00:17:24.800 fused_ordering(975) 00:17:24.800 fused_ordering(976) 00:17:24.800 fused_ordering(977) 00:17:24.800 fused_ordering(978) 00:17:24.800 fused_ordering(979) 00:17:24.800 fused_ordering(980) 00:17:24.800 fused_ordering(981) 00:17:24.800 fused_ordering(982) 00:17:24.800 fused_ordering(983) 00:17:24.800 fused_ordering(984) 00:17:24.800 fused_ordering(985) 00:17:24.800 fused_ordering(986) 00:17:24.800 fused_ordering(987) 00:17:24.800 fused_ordering(988) 00:17:24.800 fused_ordering(989) 00:17:24.800 fused_ordering(990) 00:17:24.800 fused_ordering(991) 00:17:24.800 fused_ordering(992) 00:17:24.800 fused_ordering(993) 00:17:24.800 fused_ordering(994) 00:17:24.800 fused_ordering(995) 00:17:24.800 fused_ordering(996) 00:17:24.800 fused_ordering(997) 00:17:24.800 fused_ordering(998) 00:17:24.800 fused_ordering(999) 00:17:24.800 fused_ordering(1000) 00:17:24.800 fused_ordering(1001) 00:17:24.800 fused_ordering(1002) 00:17:24.800 fused_ordering(1003) 00:17:24.800 fused_ordering(1004) 00:17:24.800 fused_ordering(1005) 00:17:24.800 fused_ordering(1006) 00:17:24.800 fused_ordering(1007) 00:17:24.800 fused_ordering(1008) 00:17:24.800 fused_ordering(1009) 00:17:24.800 fused_ordering(1010) 00:17:24.800 fused_ordering(1011) 00:17:24.800 fused_ordering(1012) 00:17:24.800 fused_ordering(1013) 00:17:24.800 fused_ordering(1014) 00:17:24.800 fused_ordering(1015) 00:17:24.800 fused_ordering(1016) 00:17:24.800 fused_ordering(1017) 00:17:24.800 fused_ordering(1018) 00:17:24.800 fused_ordering(1019) 00:17:24.800 fused_ordering(1020) 00:17:24.800 fused_ordering(1021) 00:17:24.800 fused_ordering(1022) 00:17:24.800 fused_ordering(1023) 00:17:24.800 21:20:59 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:24.800 21:20:59 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:24.800 21:20:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:24.800 21:20:59 -- nvmf/common.sh@116 -- # sync 00:17:24.800 21:20:59 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:24.800 21:20:59 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:24.800 21:20:59 -- nvmf/common.sh@119 -- # set +e 00:17:24.800 21:20:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:24.800 21:20:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:24.800 rmmod nvme_rdma 00:17:25.059 rmmod nvme_fabrics 00:17:25.059 21:20:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:25.059 21:20:59 -- nvmf/common.sh@123 -- # set -e 00:17:25.059 21:20:59 -- nvmf/common.sh@124 -- # return 0 00:17:25.059 21:20:59 -- nvmf/common.sh@477 -- # '[' -n 1656443 ']' 00:17:25.059 21:20:59 -- nvmf/common.sh@478 -- # killprocess 1656443 00:17:25.059 21:20:59 -- common/autotest_common.sh@926 -- # '[' -z 1656443 ']' 00:17:25.059 21:20:59 -- common/autotest_common.sh@930 -- # kill -0 1656443 00:17:25.059 21:20:59 -- common/autotest_common.sh@931 -- # uname 00:17:25.060 21:20:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:25.060 21:20:59 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1656443 00:17:25.060 21:20:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:25.060 21:20:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:25.060 21:20:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1656443' 00:17:25.060 killing process with pid 1656443 00:17:25.060 21:20:59 -- common/autotest_common.sh@945 -- # kill 1656443 00:17:25.060 21:20:59 -- common/autotest_common.sh@950 -- # wait 1656443 00:17:25.318 21:20:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:25.318 21:20:59 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:25.318 00:17:25.318 real 0m10.258s 00:17:25.318 user 0m4.937s 00:17:25.318 sys 0m6.693s 00:17:25.318 21:20:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.318 21:20:59 -- common/autotest_common.sh@10 -- # set +x 00:17:25.318 ************************************ 00:17:25.318 END TEST nvmf_fused_ordering 00:17:25.318 ************************************ 00:17:25.318 21:21:00 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:25.318 21:21:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:25.318 21:21:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:25.318 21:21:00 -- common/autotest_common.sh@10 -- # set +x 00:17:25.318 ************************************ 00:17:25.318 START TEST nvmf_delete_subsystem 00:17:25.318 ************************************ 00:17:25.319 21:21:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:25.319 * Looking for test storage... 
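Before the delete_subsystem test begins, the fused_ordering run above is torn down through nvmftestfini/killprocess. As a rough, simplified sketch (not the actual common.sh implementation; the pid value is simply the one reported in this run), that teardown amounts to:

# stop the nvmf target left over from the previous test, then unload the RDMA NVMe modules
NVMF_APP_PID=1656443                 # normally recorded by nvmfappstart, hard-coded here for illustration
if kill -0 "$NVMF_APP_PID" 2>/dev/null; then
  kill "$NVMF_APP_PID"               # terminate the nvmf_tgt process
  wait "$NVMF_APP_PID" 2>/dev/null || true
fi
sync
modprobe -v -r nvme-rdma || true     # rmmod nvme_rdma, as echoed in the trace
modprobe -v -r nvme-fabrics || true  # rmmod nvme_fabrics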
00:17:25.319 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:25.319 21:21:00 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.319 21:21:00 -- nvmf/common.sh@7 -- # uname -s 00:17:25.319 21:21:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.319 21:21:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.319 21:21:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.319 21:21:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.319 21:21:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.319 21:21:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.319 21:21:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.319 21:21:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.319 21:21:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.319 21:21:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.319 21:21:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:25.319 21:21:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:25.319 21:21:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.319 21:21:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.319 21:21:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.319 21:21:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:25.319 21:21:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.319 21:21:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.319 21:21:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.319 21:21:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.319 21:21:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.319 21:21:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.319 21:21:00 -- paths/export.sh@5 -- # export PATH 00:17:25.319 21:21:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.319 21:21:00 -- nvmf/common.sh@46 -- # : 0 00:17:25.319 21:21:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:25.319 21:21:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:25.319 21:21:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:25.319 21:21:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.319 21:21:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.319 21:21:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:25.319 21:21:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:25.319 21:21:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:25.319 21:21:00 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:25.319 21:21:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:25.319 21:21:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.319 21:21:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:25.319 21:21:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:25.319 21:21:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:25.319 21:21:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.319 21:21:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.319 21:21:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.319 21:21:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:25.319 21:21:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:25.319 21:21:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:25.319 21:21:00 -- common/autotest_common.sh@10 -- # set +x 00:17:33.507 21:21:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:33.507 21:21:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:33.507 21:21:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:33.507 21:21:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:33.507 21:21:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:33.507 21:21:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:33.507 21:21:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:33.507 21:21:08 -- nvmf/common.sh@294 -- # net_devs=() 00:17:33.507 21:21:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:33.507 21:21:08 -- nvmf/common.sh@295 -- # e810=() 00:17:33.507 21:21:08 -- nvmf/common.sh@295 -- # local -ga e810 00:17:33.507 21:21:08 -- nvmf/common.sh@296 -- # 
x722=() 00:17:33.507 21:21:08 -- nvmf/common.sh@296 -- # local -ga x722 00:17:33.507 21:21:08 -- nvmf/common.sh@297 -- # mlx=() 00:17:33.507 21:21:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:33.507 21:21:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.507 21:21:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.508 21:21:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:33.508 21:21:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:33.508 21:21:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:33.508 21:21:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:33.508 21:21:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:33.508 21:21:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:33.508 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:33.508 21:21:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.508 21:21:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:33.508 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:33.508 21:21:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.508 21:21:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:33.508 21:21:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.508 21:21:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:33.508 21:21:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.508 21:21:08 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:33.508 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:33.508 21:21:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.508 21:21:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.508 21:21:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:33.508 21:21:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.508 21:21:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:33.508 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:33.508 21:21:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.508 21:21:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:33.508 21:21:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:33.508 21:21:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:33.508 21:21:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:33.508 21:21:08 -- nvmf/common.sh@57 -- # uname 00:17:33.508 21:21:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:33.508 21:21:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:33.508 21:21:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:33.508 21:21:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:33.508 21:21:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:33.508 21:21:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:33.508 21:21:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:33.508 21:21:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:33.508 21:21:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:33.508 21:21:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:33.508 21:21:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:33.508 21:21:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.508 21:21:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:33.508 21:21:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:33.508 21:21:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.508 21:21:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:33.508 21:21:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:33.508 21:21:08 -- nvmf/common.sh@104 -- # continue 2 00:17:33.508 21:21:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:33.508 21:21:08 -- nvmf/common.sh@104 -- # continue 2 00:17:33.508 21:21:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:33.508 21:21:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:33.508 21:21:08 -- 
nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.508 21:21:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:33.508 21:21:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:33.508 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.508 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:33.508 altname enp217s0f0np0 00:17:33.508 altname ens818f0np0 00:17:33.508 inet 192.168.100.8/24 scope global mlx_0_0 00:17:33.508 valid_lft forever preferred_lft forever 00:17:33.508 21:21:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:33.508 21:21:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:33.508 21:21:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.508 21:21:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:33.508 21:21:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:33.508 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.508 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:33.508 altname enp217s0f1np1 00:17:33.508 altname ens818f1np1 00:17:33.508 inet 192.168.100.9/24 scope global mlx_0_1 00:17:33.508 valid_lft forever preferred_lft forever 00:17:33.508 21:21:08 -- nvmf/common.sh@410 -- # return 0 00:17:33.508 21:21:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:33.508 21:21:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:33.508 21:21:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:33.508 21:21:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:33.508 21:21:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.508 21:21:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:33.508 21:21:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:33.508 21:21:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.508 21:21:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:33.508 21:21:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:33.508 21:21:08 -- nvmf/common.sh@104 -- # continue 2 00:17:33.508 21:21:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.508 21:21:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.508 21:21:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:33.508 21:21:08 -- nvmf/common.sh@104 -- # continue 2 00:17:33.508 21:21:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:33.508 
21:21:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:33.508 21:21:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.508 21:21:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:33.508 21:21:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:33.508 21:21:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.508 21:21:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.508 21:21:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:33.508 192.168.100.9' 00:17:33.508 21:21:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:33.508 192.168.100.9' 00:17:33.508 21:21:08 -- nvmf/common.sh@445 -- # head -n 1 00:17:33.508 21:21:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:33.508 21:21:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:33.508 192.168.100.9' 00:17:33.508 21:21:08 -- nvmf/common.sh@446 -- # tail -n +2 00:17:33.508 21:21:08 -- nvmf/common.sh@446 -- # head -n 1 00:17:33.508 21:21:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:33.508 21:21:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:33.508 21:21:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:33.508 21:21:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:33.508 21:21:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:33.508 21:21:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:33.767 21:21:08 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:33.767 21:21:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:33.767 21:21:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:33.767 21:21:08 -- common/autotest_common.sh@10 -- # set +x 00:17:33.767 21:21:08 -- nvmf/common.sh@469 -- # nvmfpid=1661285 00:17:33.767 21:21:08 -- nvmf/common.sh@470 -- # waitforlisten 1661285 00:17:33.767 21:21:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:33.767 21:21:08 -- common/autotest_common.sh@819 -- # '[' -z 1661285 ']' 00:17:33.767 21:21:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.767 21:21:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:33.767 21:21:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.767 21:21:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:33.767 21:21:08 -- common/autotest_common.sh@10 -- # set +x 00:17:33.767 [2024-07-26 21:21:08.430884] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
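The awk/cut pipeline traced above is how the scripts turn the two Mellanox netdevs into NVMe-oF target addresses. A self-contained sketch of the same derivation (interface names and option string taken from this run) might look like:

get_ip_address() {                   # first IPv4 address configured on the given interface
  local ifc=$1
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma                                 # host side needs the RDMA fabrics driver
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"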
00:17:33.767 [2024-07-26 21:21:08.430937] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.767 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.767 [2024-07-26 21:21:08.517242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:33.767 [2024-07-26 21:21:08.553280] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.767 [2024-07-26 21:21:08.553409] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.767 [2024-07-26 21:21:08.553418] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.767 [2024-07-26 21:21:08.553427] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.767 [2024-07-26 21:21:08.553482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.767 [2024-07-26 21:21:08.553485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.703 21:21:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.703 21:21:09 -- common/autotest_common.sh@852 -- # return 0 00:17:34.703 21:21:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.703 21:21:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:34.703 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 21:21:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.703 21:21:09 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:34.703 21:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.703 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 [2024-07-26 21:21:09.292851] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x87ba50/0x87ff40) succeed. 00:17:34.703 [2024-07-26 21:21:09.301589] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x87cf50/0x8c15d0) succeed. 
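The target launch and transport creation just traced correspond roughly to the following; paths and flags are copied from this run, and the readiness loop is a simplification of the scripts' waitforlisten helper, so treat it as a sketch rather than the real implementation.

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &      # two reactors (cores 0 and 1)
nvmfpid=$!
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2                                                 # wait for the RPC socket to come up
done
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192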
00:17:34.703 21:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.703 21:21:09 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:34.703 21:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.703 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 21:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.703 21:21:09 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:34.703 21:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.703 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 [2024-07-26 21:21:09.383278] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:34.703 21:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.703 21:21:09 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:34.703 21:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.703 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 NULL1 00:17:34.703 21:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.703 21:21:09 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:34.703 21:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.703 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 Delay0 00:17:34.703 21:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.703 21:21:09 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:34.703 21:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:34.703 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:17:34.704 21:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:34.704 21:21:09 -- target/delete_subsystem.sh@28 -- # perf_pid=1661495 00:17:34.704 21:21:09 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:34.704 21:21:09 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:34.704 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.704 [2024-07-26 21:21:09.489930] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
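Taken together, the subsystem setup and the background load generator traced above boil down to the sequence below (all names and values copied from this run; the delay bdev is what keeps I/O outstanding long enough for the upcoming delete to race with it):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC bdev_null_create NULL1 1000 512          # 1000 MB null bdev with 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
  -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &   # 5 s of 70/30 random read/write at queue depth 128
perf_pid=$!
sleep 2                                       # give perf time to connect and fill its queues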
00:17:36.610 21:21:11 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.610 21:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.610 21:21:11 -- common/autotest_common.sh@10 -- # set +x 00:17:37.988 NVMe io qpair process completion error 00:17:37.988 NVMe io qpair process completion error 00:17:37.988 NVMe io qpair process completion error 00:17:37.988 NVMe io qpair process completion error 00:17:37.988 NVMe io qpair process completion error 00:17:37.988 NVMe io qpair process completion error 00:17:37.988 21:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:37.988 21:21:12 -- target/delete_subsystem.sh@34 -- # delay=0 00:17:37.988 21:21:12 -- target/delete_subsystem.sh@35 -- # kill -0 1661495 00:17:37.988 21:21:12 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:38.246 21:21:13 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:38.246 21:21:13 -- target/delete_subsystem.sh@35 -- # kill -0 1661495 00:17:38.247 21:21:13 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Write completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Write completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Write completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Write completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Write completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Write completed with error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 00:17:38.814 Read completed with 
error (sct=0, sc=8) 00:17:38.814 starting I/O failed: -6 (several hundred further Read/Write "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries repeat in the same pattern between 00:17:38.814 and 00:17:38.815 while the queued I/O against the deleted subsystem is failed back to the initiator) 
00:17:38.815 Write completed with error (sct=0, sc=8) 00:17:38.815 Read completed with error (sct=0, sc=8) 00:17:38.815 Read completed with error (sct=0, sc=8) 00:17:38.815 Read completed with error (sct=0, sc=8) 00:17:38.815 Write completed with error (sct=0, sc=8) 00:17:38.815 Read completed with error (sct=0, sc=8) 00:17:38.815 Write completed with error (sct=0, sc=8) 00:17:38.815 Read completed with error (sct=0, sc=8) 00:17:38.815 Read completed with error (sct=0, sc=8) 00:17:38.815 Read completed with error (sct=0, sc=8) 00:17:38.815 21:21:13 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:38.815 21:21:13 -- target/delete_subsystem.sh@35 -- # kill -0 1661495 00:17:38.815 21:21:13 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:38.815 [2024-07-26 21:21:13.590440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:38.815 [2024-07-26 21:21:13.590484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:38.815 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:38.815 Initializing NVMe Controllers 00:17:38.815 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:38.815 Controller IO queue size 128, less than required. 00:17:38.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:38.815 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:38.815 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:38.815 Initialization complete. Launching workers. 00:17:38.815 ======================================================== 00:17:38.815 Latency(us) 00:17:38.815 Device Information : IOPS MiB/s Average min max 00:17:38.815 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.50 0.04 1593614.58 1000081.58 2975871.10 00:17:38.815 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.50 0.04 1595166.61 1000332.55 2976863.84 00:17:38.815 ======================================================== 00:17:38.815 Total : 161.00 0.08 1594390.59 1000081.58 2976863.84 00:17:38.815 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@35 -- # kill -0 1661495 00:17:39.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1661495) - No such process 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@45 -- # NOT wait 1661495 00:17:39.383 21:21:14 -- common/autotest_common.sh@640 -- # local es=0 00:17:39.383 21:21:14 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1661495 00:17:39.383 21:21:14 -- common/autotest_common.sh@628 -- # local arg=wait 00:17:39.383 21:21:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:39.383 21:21:14 -- common/autotest_common.sh@632 -- # type -t wait 00:17:39.383 21:21:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:39.383 21:21:14 -- common/autotest_common.sh@643 -- # wait 1661495 00:17:39.383 21:21:14 -- common/autotest_common.sh@643 -- # es=1 00:17:39.383 21:21:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:39.383 21:21:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:39.383 21:21:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
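The xtrace output above shows the delete_subsystem test's poll loop: it probes the spdk_nvme_perf process with kill -0, sleeps 0.5 s between probes, and gives up once the delay counter passes 30. A minimal standalone sketch of that pattern in bash (the PID is the one from this run; the real logic lives in test/nvmf/target/delete_subsystem.sh):

  #!/usr/bin/env bash
  # Sketch of the wait loop traced above: poll until spdk_nvme_perf exits,
  # or fail after roughly 15 s (30 iterations x 0.5 s). perf_pid is illustrative.
  perf_pid=1661495
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      if (( delay++ > 30 )); then
          echo "spdk_nvme_perf (pid $perf_pid) did not exit in time" >&2
          exit 1
      fi
      sleep 0.5
  done
  # Once kill -0 fails the process is gone; the test then verifies that a
  # subsequent wait on the PID also fails, as traced above (NOT wait 1661495).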
00:17:39.383 21:21:14 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:39.383 21:21:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.383 21:21:14 -- common/autotest_common.sh@10 -- # set +x 00:17:39.383 21:21:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:39.383 21:21:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.383 21:21:14 -- common/autotest_common.sh@10 -- # set +x 00:17:39.383 [2024-07-26 21:21:14.108958] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:39.383 21:21:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:39.383 21:21:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.383 21:21:14 -- common/autotest_common.sh@10 -- # set +x 00:17:39.383 21:21:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@54 -- # perf_pid=1662312 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:39.383 21:21:14 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:39.383 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.383 [2024-07-26 21:21:14.194823] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
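The RPCs and the perf invocation traced above condense to the sketch below. rpc_cmd in these tests is a thin wrapper around SPDK's scripts/rpc.py, and the paths are the ones used in this workspace; treat it as an outline rather than the exact script:

  #!/usr/bin/env bash
  # Recreate the subsystem and start the second timed perf run, as traced above.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # 3 s, queue depth 128, 70/30 random read/write, 512-byte I/O (flags as in the log)
  "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!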
00:17:39.951 21:21:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:39.951 21:21:14 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:39.951 21:21:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:40.518 21:21:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:40.518 21:21:15 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:40.518 21:21:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:40.777 21:21:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:40.777 21:21:15 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:40.777 21:21:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:41.344 21:21:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:41.344 21:21:16 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:41.344 21:21:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:41.911 21:21:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:41.911 21:21:16 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:41.911 21:21:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:42.478 21:21:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:42.478 21:21:17 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:42.478 21:21:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:43.046 21:21:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:43.046 21:21:17 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:43.046 21:21:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:43.304 21:21:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:43.304 21:21:18 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:43.304 21:21:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:43.871 21:21:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:43.871 21:21:18 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:43.871 21:21:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:44.439 21:21:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:44.439 21:21:19 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:44.439 21:21:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:45.007 21:21:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:45.007 21:21:19 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:45.007 21:21:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:45.575 21:21:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:45.575 21:21:20 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:45.575 21:21:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:45.835 21:21:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:45.835 21:21:20 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:45.835 21:21:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:46.404 21:21:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:46.404 21:21:21 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:46.404 21:21:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:46.663 Initializing NVMe Controllers 00:17:46.664 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.664 Controller IO queue size 128, less than required. 00:17:46.664 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:17:46.664 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:46.664 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:46.664 Initialization complete. Launching workers. 00:17:46.664 ======================================================== 00:17:46.664 Latency(us) 00:17:46.664 Device Information : IOPS MiB/s Average min max 00:17:46.664 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001115.24 1000055.86 1003541.12 00:17:46.664 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002204.51 1000071.76 1005549.49 00:17:46.664 ======================================================== 00:17:46.664 Total : 256.00 0.12 1001659.87 1000055.86 1005549.49 00:17:46.664 00:17:46.923 21:21:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:46.923 21:21:21 -- target/delete_subsystem.sh@57 -- # kill -0 1662312 00:17:46.923 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1662312) - No such process 00:17:46.923 21:21:21 -- target/delete_subsystem.sh@67 -- # wait 1662312 00:17:46.923 21:21:21 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:46.923 21:21:21 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:46.923 21:21:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:46.923 21:21:21 -- nvmf/common.sh@116 -- # sync 00:17:46.923 21:21:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:46.923 21:21:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:46.923 21:21:21 -- nvmf/common.sh@119 -- # set +e 00:17:46.923 21:21:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:46.923 21:21:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:46.923 rmmod nvme_rdma 00:17:46.923 rmmod nvme_fabrics 00:17:46.923 21:21:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:46.923 21:21:21 -- nvmf/common.sh@123 -- # set -e 00:17:46.923 21:21:21 -- nvmf/common.sh@124 -- # return 0 00:17:46.923 21:21:21 -- nvmf/common.sh@477 -- # '[' -n 1661285 ']' 00:17:46.923 21:21:21 -- nvmf/common.sh@478 -- # killprocess 1661285 00:17:46.923 21:21:21 -- common/autotest_common.sh@926 -- # '[' -z 1661285 ']' 00:17:46.923 21:21:21 -- common/autotest_common.sh@930 -- # kill -0 1661285 00:17:46.923 21:21:21 -- common/autotest_common.sh@931 -- # uname 00:17:46.923 21:21:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:46.923 21:21:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1661285 00:17:47.182 21:21:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:47.182 21:21:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:47.183 21:21:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1661285' 00:17:47.183 killing process with pid 1661285 00:17:47.183 21:21:21 -- common/autotest_common.sh@945 -- # kill 1661285 00:17:47.183 21:21:21 -- common/autotest_common.sh@950 -- # wait 1661285 00:17:47.183 21:21:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:47.183 21:21:22 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:47.183 00:17:47.183 real 0m21.997s 00:17:47.183 user 0m50.389s 00:17:47.183 sys 0m7.502s 00:17:47.183 21:21:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.183 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:17:47.183 ************************************ 00:17:47.183 END TEST nvmf_delete_subsystem 00:17:47.183 
************************************ 00:17:47.442 21:21:22 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:47.442 21:21:22 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:47.442 21:21:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:47.442 21:21:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:47.442 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:17:47.442 ************************************ 00:17:47.442 START TEST nvmf_nvme_cli 00:17:47.442 ************************************ 00:17:47.442 21:21:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:47.442 * Looking for test storage... 00:17:47.442 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:47.442 21:21:22 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.442 21:21:22 -- nvmf/common.sh@7 -- # uname -s 00:17:47.442 21:21:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.442 21:21:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.442 21:21:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.442 21:21:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.442 21:21:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.442 21:21:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.442 21:21:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.442 21:21:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.442 21:21:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.442 21:21:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.442 21:21:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:47.442 21:21:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:47.443 21:21:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.443 21:21:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.443 21:21:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.443 21:21:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:47.443 21:21:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.443 21:21:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.443 21:21:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.443 21:21:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.443 21:21:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.443 21:21:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.443 21:21:22 -- paths/export.sh@5 -- # export PATH 00:17:47.443 21:21:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.443 21:21:22 -- nvmf/common.sh@46 -- # : 0 00:17:47.443 21:21:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:47.443 21:21:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:47.443 21:21:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:47.443 21:21:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.443 21:21:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.443 21:21:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:47.443 21:21:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:47.443 21:21:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:47.443 21:21:22 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:47.443 21:21:22 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:47.443 21:21:22 -- target/nvme_cli.sh@14 -- # devs=() 00:17:47.443 21:21:22 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:47.443 21:21:22 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:47.443 21:21:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.443 21:21:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:47.443 21:21:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:47.443 21:21:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:47.443 21:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.443 21:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.443 21:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.443 21:21:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:47.443 21:21:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:47.443 21:21:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:47.443 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:17:55.611 21:21:30 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:55.611 21:21:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:55.611 21:21:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:55.611 21:21:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:55.611 21:21:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:55.611 21:21:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:55.611 21:21:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:55.611 21:21:30 -- nvmf/common.sh@294 -- # net_devs=() 00:17:55.611 21:21:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:55.611 21:21:30 -- nvmf/common.sh@295 -- # e810=() 00:17:55.611 21:21:30 -- nvmf/common.sh@295 -- # local -ga e810 00:17:55.611 21:21:30 -- nvmf/common.sh@296 -- # x722=() 00:17:55.611 21:21:30 -- nvmf/common.sh@296 -- # local -ga x722 00:17:55.611 21:21:30 -- nvmf/common.sh@297 -- # mlx=() 00:17:55.611 21:21:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:55.611 21:21:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.611 21:21:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:55.611 21:21:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:55.611 21:21:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:55.611 21:21:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:55.611 21:21:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:55.611 21:21:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:55.611 21:21:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:55.611 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:55.611 21:21:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:55.611 21:21:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:55.611 21:21:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:55.611 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:55.611 21:21:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@349 
-- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:55.611 21:21:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:55.611 21:21:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:55.611 21:21:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.611 21:21:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:55.611 21:21:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.611 21:21:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:55.611 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:55.611 21:21:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.611 21:21:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:55.611 21:21:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.611 21:21:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:55.611 21:21:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.611 21:21:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:55.611 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:55.611 21:21:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.611 21:21:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:55.611 21:21:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:55.611 21:21:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:55.611 21:21:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:55.611 21:21:30 -- nvmf/common.sh@57 -- # uname 00:17:55.611 21:21:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:55.611 21:21:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:55.611 21:21:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:55.611 21:21:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:55.611 21:21:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:55.611 21:21:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:55.611 21:21:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:55.611 21:21:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:55.611 21:21:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:55.611 21:21:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:55.611 21:21:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:55.611 21:21:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:55.611 21:21:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:55.611 21:21:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:55.611 21:21:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:55.611 21:21:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:55.611 21:21:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:55.611 21:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:55.611 21:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:55.611 21:21:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:55.612 21:21:30 -- nvmf/common.sh@104 -- # 
continue 2 00:17:55.612 21:21:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:55.612 21:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:55.612 21:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:55.612 21:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:55.612 21:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:55.612 21:21:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:55.612 21:21:30 -- nvmf/common.sh@104 -- # continue 2 00:17:55.612 21:21:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:55.612 21:21:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:55.612 21:21:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:55.612 21:21:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:55.612 21:21:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:55.612 21:21:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:55.612 21:21:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:55.612 21:21:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:55.612 21:21:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:55.612 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:55.612 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:55.612 altname enp217s0f0np0 00:17:55.612 altname ens818f0np0 00:17:55.612 inet 192.168.100.8/24 scope global mlx_0_0 00:17:55.612 valid_lft forever preferred_lft forever 00:17:55.612 21:21:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:55.612 21:21:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:55.612 21:21:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:55.612 21:21:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:55.612 21:21:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:55.612 21:21:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:55.612 21:21:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:55.612 21:21:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:55.612 21:21:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:55.612 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:55.612 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:55.612 altname enp217s0f1np1 00:17:55.612 altname ens818f1np1 00:17:55.612 inet 192.168.100.9/24 scope global mlx_0_1 00:17:55.612 valid_lft forever preferred_lft forever 00:17:55.612 21:21:30 -- nvmf/common.sh@410 -- # return 0 00:17:55.612 21:21:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:55.612 21:21:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:55.612 21:21:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:55.612 21:21:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:55.612 21:21:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:55.612 21:21:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:55.612 21:21:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:55.612 21:21:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:55.612 21:21:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:55.872 21:21:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:55.872 21:21:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:55.872 21:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:55.872 21:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:55.872 21:21:30 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:17:55.872 21:21:30 -- nvmf/common.sh@104 -- # continue 2 00:17:55.872 21:21:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:55.872 21:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:55.872 21:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:55.872 21:21:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:55.872 21:21:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:55.872 21:21:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:55.872 21:21:30 -- nvmf/common.sh@104 -- # continue 2 00:17:55.872 21:21:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:55.872 21:21:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:55.872 21:21:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:55.872 21:21:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:55.872 21:21:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:55.872 21:21:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:55.872 21:21:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:55.872 21:21:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:55.872 21:21:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:55.872 21:21:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:55.872 21:21:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:55.872 21:21:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:55.872 21:21:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:55.872 192.168.100.9' 00:17:55.872 21:21:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:55.872 192.168.100.9' 00:17:55.872 21:21:30 -- nvmf/common.sh@445 -- # head -n 1 00:17:55.872 21:21:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:55.872 21:21:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:55.872 192.168.100.9' 00:17:55.872 21:21:30 -- nvmf/common.sh@446 -- # tail -n +2 00:17:55.872 21:21:30 -- nvmf/common.sh@446 -- # head -n 1 00:17:55.872 21:21:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:55.872 21:21:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:55.872 21:21:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:55.872 21:21:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:55.872 21:21:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:55.872 21:21:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:55.872 21:21:30 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:55.872 21:21:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:55.872 21:21:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:55.872 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:17:55.872 21:21:30 -- nvmf/common.sh@469 -- # nvmfpid=1667710 00:17:55.872 21:21:30 -- nvmf/common.sh@470 -- # waitforlisten 1667710 00:17:55.872 21:21:30 -- common/autotest_common.sh@819 -- # '[' -z 1667710 ']' 00:17:55.872 21:21:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.872 21:21:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:55.872 21:21:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
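The address probing above reduces to a short ip/awk pipeline per RDMA netdev; a minimal sketch using the interface names and addresses reported in this run:

  # Sketch of the IP discovery performed by nvmf/common.sh above.
  get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

  NVMF_FIRST_TARGET_IP=$(get_ip mlx_0_0)    # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip mlx_0_1)   # 192.168.100.9 in this run
  echo "RDMA target IPs: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"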
00:17:55.872 21:21:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:55.872 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:17:55.872 21:21:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:55.872 [2024-07-26 21:21:30.613945] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:55.872 [2024-07-26 21:21:30.613998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.872 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.872 [2024-07-26 21:21:30.701902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:55.872 [2024-07-26 21:21:30.740829] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:55.872 [2024-07-26 21:21:30.740939] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.872 [2024-07-26 21:21:30.740952] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.872 [2024-07-26 21:21:30.740961] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.872 [2024-07-26 21:21:30.741004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.872 [2024-07-26 21:21:30.741027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.872 [2024-07-26 21:21:30.741124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.872 [2024-07-26 21:21:30.741125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.809 21:21:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:56.809 21:21:31 -- common/autotest_common.sh@852 -- # return 0 00:17:56.809 21:21:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:56.809 21:21:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:56.809 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:17:56.809 21:21:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.809 21:21:31 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:56.809 21:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.809 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:17:56.809 [2024-07-26 21:21:31.470076] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f03060/0x1f07550) succeed. 00:17:56.809 [2024-07-26 21:21:31.480428] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f04650/0x1f48be0) succeed. 
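The target bring-up for the nvme_cli test traced above (nvmf_tgt started with -m 0xF, then the RDMA transport) can be outlined as follows. This is a sketch: the real helpers are nvmfappstart and waitforlisten in the test framework, and rpc_get_methods is used here only as a simple readiness probe.

  #!/usr/bin/env bash
  # Rough outline of nvmfappstart + nvmf_create_transport as traced above.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers.
  until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  # RDMA transport with the shared-buffer settings used by this run.
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192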
00:17:56.809 21:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.809 21:21:31 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:56.809 21:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.809 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:17:56.809 Malloc0 00:17:56.809 21:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.809 21:21:31 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:56.809 21:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.809 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:17:56.809 Malloc1 00:17:56.809 21:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.809 21:21:31 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:56.809 21:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.809 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:17:56.809 21:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.809 21:21:31 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.809 21:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.809 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:17:56.809 21:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.809 21:21:31 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.809 21:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.809 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:17:56.809 21:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.809 21:21:31 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:56.809 21:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.809 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:17:56.809 [2024-07-26 21:21:31.665747] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:56.809 21:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.809 21:21:31 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:56.809 21:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.809 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:17:56.809 21:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.809 21:21:31 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:57.067 00:17:57.067 Discovery Log Number of Records 2, Generation counter 2 00:17:57.067 =====Discovery Log Entry 0====== 00:17:57.067 trtype: rdma 00:17:57.067 adrfam: ipv4 00:17:57.067 subtype: current discovery subsystem 00:17:57.067 treq: not required 00:17:57.067 portid: 0 00:17:57.067 trsvcid: 4420 00:17:57.067 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:57.067 traddr: 192.168.100.8 00:17:57.067 eflags: explicit discovery connections, duplicate discovery information 00:17:57.067 rdma_prtype: not specified 00:17:57.067 rdma_qptype: connected 00:17:57.067 rdma_cms: rdma-cm 00:17:57.067 rdma_pkey: 0x0000 00:17:57.067 =====Discovery Log Entry 1====== 00:17:57.067 trtype: rdma 
00:17:57.067 adrfam: ipv4 00:17:57.067 subtype: nvme subsystem 00:17:57.067 treq: not required 00:17:57.067 portid: 0 00:17:57.067 trsvcid: 4420 00:17:57.067 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:57.067 traddr: 192.168.100.8 00:17:57.067 eflags: none 00:17:57.067 rdma_prtype: not specified 00:17:57.067 rdma_qptype: connected 00:17:57.067 rdma_cms: rdma-cm 00:17:57.067 rdma_pkey: 0x0000 00:17:57.067 21:21:31 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:57.067 21:21:31 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:57.067 21:21:31 -- nvmf/common.sh@510 -- # local dev _ 00:17:57.067 21:21:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:57.067 21:21:31 -- nvmf/common.sh@509 -- # nvme list 00:17:57.067 21:21:31 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:57.067 21:21:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:57.067 21:21:31 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:57.067 21:21:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:57.067 21:21:31 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:57.067 21:21:31 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:58.003 21:21:32 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:58.003 21:21:32 -- common/autotest_common.sh@1177 -- # local i=0 00:17:58.003 21:21:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.003 21:21:32 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:17:58.003 21:21:32 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:17:58.003 21:21:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:59.910 21:21:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:59.910 21:21:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:59.910 21:21:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:00.168 21:21:34 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:18:00.168 21:21:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:00.168 21:21:34 -- common/autotest_common.sh@1187 -- # return 0 00:18:00.168 21:21:34 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:00.168 21:21:34 -- nvmf/common.sh@510 -- # local dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@509 -- # nvme list 00:18:00.168 21:21:34 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:00.168 21:21:34 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:00.168 21:21:34 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:00.168 /dev/nvme0n1 ]] 00:18:00.168 21:21:34 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:00.168 21:21:34 -- target/nvme_cli.sh@59 -- # get_nvme_devs 
00:18:00.168 21:21:34 -- nvmf/common.sh@510 -- # local dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@509 -- # nvme list 00:18:00.168 21:21:34 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:00.168 21:21:34 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:00.168 21:21:34 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:18:00.168 21:21:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:00.168 21:21:34 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:00.168 21:21:34 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.103 21:21:35 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:01.103 21:21:35 -- common/autotest_common.sh@1198 -- # local i=0 00:18:01.103 21:21:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:01.103 21:21:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.103 21:21:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.103 21:21:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:01.103 21:21:35 -- common/autotest_common.sh@1210 -- # return 0 00:18:01.103 21:21:35 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:01.103 21:21:35 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.103 21:21:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.103 21:21:35 -- common/autotest_common.sh@10 -- # set +x 00:18:01.103 21:21:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.103 21:21:35 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:01.103 21:21:35 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:01.103 21:21:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:01.103 21:21:35 -- nvmf/common.sh@116 -- # sync 00:18:01.103 21:21:35 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:01.103 21:21:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:01.103 21:21:35 -- nvmf/common.sh@119 -- # set +e 00:18:01.103 21:21:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:01.103 21:21:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:01.103 rmmod nvme_rdma 00:18:01.103 rmmod nvme_fabrics 00:18:01.103 21:21:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:01.103 21:21:35 -- nvmf/common.sh@123 -- # set -e 00:18:01.103 21:21:35 -- nvmf/common.sh@124 -- # return 0 00:18:01.103 21:21:35 -- nvmf/common.sh@477 -- # '[' -n 1667710 ']' 00:18:01.103 21:21:35 -- nvmf/common.sh@478 -- # killprocess 1667710 00:18:01.103 21:21:35 -- common/autotest_common.sh@926 -- # '[' -z 1667710 ']' 00:18:01.103 21:21:35 -- common/autotest_common.sh@930 -- # kill -0 1667710 00:18:01.103 21:21:35 -- common/autotest_common.sh@931 -- # uname 00:18:01.103 21:21:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:01.103 21:21:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1667710 00:18:01.103 21:21:35 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:01.103 21:21:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:01.103 21:21:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1667710' 00:18:01.104 killing process with pid 1667710 00:18:01.104 21:21:35 -- common/autotest_common.sh@945 -- # kill 1667710 00:18:01.104 21:21:35 -- common/autotest_common.sh@950 -- # wait 1667710 00:18:01.362 21:21:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:01.362 21:21:36 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:01.362 00:18:01.362 real 0m14.145s 00:18:01.362 user 0m24.064s 00:18:01.362 sys 0m6.970s 00:18:01.362 21:21:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.362 21:21:36 -- common/autotest_common.sh@10 -- # set +x 00:18:01.362 ************************************ 00:18:01.362 END TEST nvmf_nvme_cli 00:18:01.362 ************************************ 00:18:01.621 21:21:36 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:18:01.621 21:21:36 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:18:01.621 21:21:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:01.621 21:21:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:01.621 21:21:36 -- common/autotest_common.sh@10 -- # set +x 00:18:01.621 ************************************ 00:18:01.621 START TEST nvmf_host_management 00:18:01.621 ************************************ 00:18:01.621 21:21:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:18:01.621 * Looking for test storage... 00:18:01.621 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:01.621 21:21:36 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.621 21:21:36 -- nvmf/common.sh@7 -- # uname -s 00:18:01.621 21:21:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.621 21:21:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.621 21:21:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.621 21:21:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.621 21:21:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.621 21:21:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.621 21:21:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.621 21:21:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.621 21:21:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.621 21:21:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.621 21:21:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:01.621 21:21:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:01.621 21:21:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.621 21:21:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.621 21:21:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.621 21:21:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:01.621 21:21:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.621 21:21:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.621 21:21:36 -- 
scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.621 21:21:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.621 21:21:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.622 21:21:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.622 21:21:36 -- paths/export.sh@5 -- # export PATH 00:18:01.622 21:21:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.622 21:21:36 -- nvmf/common.sh@46 -- # : 0 00:18:01.622 21:21:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:01.622 21:21:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:01.622 21:21:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:01.622 21:21:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.622 21:21:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.622 21:21:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:01.622 21:21:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:01.622 21:21:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:01.622 21:21:36 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:01.622 21:21:36 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:01.622 21:21:36 -- target/host_management.sh@104 -- # nvmftestinit 00:18:01.622 21:21:36 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:01.622 21:21:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.622 21:21:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:01.622 21:21:36 -- nvmf/common.sh@398 -- # local -g 
is_hw=no 00:18:01.622 21:21:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:01.622 21:21:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.622 21:21:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.622 21:21:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.622 21:21:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:01.622 21:21:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:01.622 21:21:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:01.622 21:21:36 -- common/autotest_common.sh@10 -- # set +x 00:18:09.745 21:21:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:09.745 21:21:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:09.745 21:21:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:09.745 21:21:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:09.745 21:21:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:09.745 21:21:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:09.745 21:21:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:09.745 21:21:44 -- nvmf/common.sh@294 -- # net_devs=() 00:18:09.745 21:21:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:09.745 21:21:44 -- nvmf/common.sh@295 -- # e810=() 00:18:09.745 21:21:44 -- nvmf/common.sh@295 -- # local -ga e810 00:18:09.745 21:21:44 -- nvmf/common.sh@296 -- # x722=() 00:18:09.745 21:21:44 -- nvmf/common.sh@296 -- # local -ga x722 00:18:09.745 21:21:44 -- nvmf/common.sh@297 -- # mlx=() 00:18:09.745 21:21:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:09.745 21:21:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.745 21:21:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.746 21:21:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:09.746 21:21:44 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:09.746 21:21:44 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:09.746 21:21:44 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:09.746 21:21:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:09.746 21:21:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:09.746 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:09.746 21:21:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:18:09.746 21:21:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:09.746 21:21:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:09.746 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:09.746 21:21:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:09.746 21:21:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:09.746 21:21:44 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.746 21:21:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:09.746 21:21:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.746 21:21:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:09.746 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:09.746 21:21:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.746 21:21:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.746 21:21:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:09.746 21:21:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.746 21:21:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:09.746 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:09.746 21:21:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.746 21:21:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:09.746 21:21:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:09.746 21:21:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:09.746 21:21:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:09.746 21:21:44 -- nvmf/common.sh@57 -- # uname 00:18:09.746 21:21:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:09.746 21:21:44 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:09.746 21:21:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:09.746 21:21:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:09.746 21:21:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:09.746 21:21:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:09.746 21:21:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:09.746 21:21:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:09.746 21:21:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:09.746 21:21:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:09.746 21:21:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:09.746 21:21:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:09.746 21:21:44 -- 
nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:09.746 21:21:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:09.746 21:21:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:09.746 21:21:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:09.746 21:21:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:09.746 21:21:44 -- nvmf/common.sh@104 -- # continue 2 00:18:09.746 21:21:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:09.746 21:21:44 -- nvmf/common.sh@104 -- # continue 2 00:18:09.746 21:21:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:09.746 21:21:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:09.746 21:21:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:09.746 21:21:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:09.746 21:21:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:09.746 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:09.746 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:09.746 altname enp217s0f0np0 00:18:09.746 altname ens818f0np0 00:18:09.746 inet 192.168.100.8/24 scope global mlx_0_0 00:18:09.746 valid_lft forever preferred_lft forever 00:18:09.746 21:21:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:09.746 21:21:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:09.746 21:21:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:09.746 21:21:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:09.746 21:21:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:09.746 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:09.746 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:09.746 altname enp217s0f1np1 00:18:09.746 altname ens818f1np1 00:18:09.746 inet 192.168.100.9/24 scope global mlx_0_1 00:18:09.746 valid_lft forever preferred_lft forever 00:18:09.746 21:21:44 -- nvmf/common.sh@410 -- # return 0 00:18:09.746 21:21:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:09.746 21:21:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:09.746 21:21:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:09.746 21:21:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:09.746 21:21:44 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:09.746 21:21:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:09.746 21:21:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:09.746 21:21:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:09.746 21:21:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:09.746 21:21:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:09.746 21:21:44 -- nvmf/common.sh@104 -- # continue 2 00:18:09.746 21:21:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.746 21:21:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:09.746 21:21:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:09.746 21:21:44 -- nvmf/common.sh@104 -- # continue 2 00:18:09.746 21:21:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:09.746 21:21:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:09.746 21:21:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:09.746 21:21:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:09.746 21:21:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:09.746 21:21:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:09.746 21:21:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:09.746 21:21:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:09.746 192.168.100.9' 00:18:09.746 21:21:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:09.746 192.168.100.9' 00:18:09.746 21:21:44 -- nvmf/common.sh@445 -- # head -n 1 00:18:09.746 21:21:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:09.746 21:21:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:09.746 192.168.100.9' 00:18:09.746 21:21:44 -- nvmf/common.sh@446 -- # tail -n +2 00:18:09.746 21:21:44 -- nvmf/common.sh@446 -- # head -n 1 00:18:09.746 21:21:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:09.746 21:21:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:09.746 21:21:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:09.747 21:21:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:09.747 21:21:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:09.747 21:21:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:09.747 21:21:44 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:18:09.747 21:21:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:09.747 21:21:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:09.747 21:21:44 -- common/autotest_common.sh@10 -- # set +x 00:18:09.747 
************************************ 00:18:09.747 START TEST nvmf_host_management 00:18:09.747 ************************************ 00:18:09.747 21:21:44 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:18:09.747 21:21:44 -- target/host_management.sh@69 -- # starttarget 00:18:09.747 21:21:44 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:18:09.747 21:21:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:09.747 21:21:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:09.747 21:21:44 -- common/autotest_common.sh@10 -- # set +x 00:18:09.747 21:21:44 -- nvmf/common.sh@469 -- # nvmfpid=1672651 00:18:09.747 21:21:44 -- nvmf/common.sh@470 -- # waitforlisten 1672651 00:18:09.747 21:21:44 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:09.747 21:21:44 -- common/autotest_common.sh@819 -- # '[' -z 1672651 ']' 00:18:09.747 21:21:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.747 21:21:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:09.747 21:21:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.747 21:21:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:09.747 21:21:44 -- common/autotest_common.sh@10 -- # set +x 00:18:09.747 [2024-07-26 21:21:44.607973] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:09.747 [2024-07-26 21:21:44.608032] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.007 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.007 [2024-07-26 21:21:44.696496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.007 [2024-07-26 21:21:44.734425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:10.007 [2024-07-26 21:21:44.734539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.007 [2024-07-26 21:21:44.734549] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.007 [2024-07-26 21:21:44.734558] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.007 [2024-07-26 21:21:44.734661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.007 [2024-07-26 21:21:44.734744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.007 [2024-07-26 21:21:44.734834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.007 [2024-07-26 21:21:44.734836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:10.575 21:21:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:10.575 21:21:45 -- common/autotest_common.sh@852 -- # return 0 00:18:10.575 21:21:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:10.575 21:21:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:10.575 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:18:10.834 21:21:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.834 21:21:45 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:10.834 21:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:10.834 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:18:10.834 [2024-07-26 21:21:45.478525] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x122d350/0x1231840) succeed. 00:18:10.834 [2024-07-26 21:21:45.488594] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x122e940/0x1272ed0) succeed. 00:18:10.834 21:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:10.834 21:21:45 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:18:10.834 21:21:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:10.834 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:18:10.834 21:21:45 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:10.834 21:21:45 -- target/host_management.sh@23 -- # cat 00:18:10.834 21:21:45 -- target/host_management.sh@30 -- # rpc_cmd 00:18:10.834 21:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:10.834 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:18:10.834 Malloc0 00:18:10.834 [2024-07-26 21:21:45.666601] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:10.834 21:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:10.834 21:21:45 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:18:10.834 21:21:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:10.834 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:18:11.093 21:21:45 -- target/host_management.sh@73 -- # perfpid=1672958 00:18:11.093 21:21:45 -- target/host_management.sh@74 -- # waitforlisten 1672958 /var/tmp/bdevperf.sock 00:18:11.093 21:21:45 -- common/autotest_common.sh@819 -- # '[' -z 1672958 ']' 00:18:11.093 21:21:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.093 21:21:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.093 21:21:45 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:11.093 21:21:45 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:18:11.093 21:21:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:11.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.094 21:21:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.094 21:21:45 -- nvmf/common.sh@520 -- # config=() 00:18:11.094 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:18:11.094 21:21:45 -- nvmf/common.sh@520 -- # local subsystem config 00:18:11.094 21:21:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:11.094 21:21:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:11.094 { 00:18:11.094 "params": { 00:18:11.094 "name": "Nvme$subsystem", 00:18:11.094 "trtype": "$TEST_TRANSPORT", 00:18:11.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.094 "adrfam": "ipv4", 00:18:11.094 "trsvcid": "$NVMF_PORT", 00:18:11.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.094 "hdgst": ${hdgst:-false}, 00:18:11.094 "ddgst": ${ddgst:-false} 00:18:11.094 }, 00:18:11.094 "method": "bdev_nvme_attach_controller" 00:18:11.094 } 00:18:11.094 EOF 00:18:11.094 )") 00:18:11.094 21:21:45 -- nvmf/common.sh@542 -- # cat 00:18:11.094 21:21:45 -- nvmf/common.sh@544 -- # jq . 00:18:11.094 21:21:45 -- nvmf/common.sh@545 -- # IFS=, 00:18:11.094 21:21:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:11.094 "params": { 00:18:11.094 "name": "Nvme0", 00:18:11.094 "trtype": "rdma", 00:18:11.094 "traddr": "192.168.100.8", 00:18:11.094 "adrfam": "ipv4", 00:18:11.094 "trsvcid": "4420", 00:18:11.094 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:11.094 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:11.094 "hdgst": false, 00:18:11.094 "ddgst": false 00:18:11.094 }, 00:18:11.094 "method": "bdev_nvme_attach_controller" 00:18:11.094 }' 00:18:11.094 [2024-07-26 21:21:45.765505] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:11.094 [2024-07-26 21:21:45.765559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1672958 ] 00:18:11.094 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.094 [2024-07-26 21:21:45.851923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.094 [2024-07-26 21:21:45.888494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.352 Running I/O for 10 seconds... 
00:18:11.919 21:21:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:11.919 21:21:46 -- common/autotest_common.sh@852 -- # return 0 00:18:11.919 21:21:46 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:11.919 21:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.919 21:21:46 -- common/autotest_common.sh@10 -- # set +x 00:18:11.919 21:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.919 21:21:46 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.919 21:21:46 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:18:11.919 21:21:46 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:11.919 21:21:46 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:18:11.919 21:21:46 -- target/host_management.sh@52 -- # local ret=1 00:18:11.919 21:21:46 -- target/host_management.sh@53 -- # local i 00:18:11.919 21:21:46 -- target/host_management.sh@54 -- # (( i = 10 )) 00:18:11.919 21:21:46 -- target/host_management.sh@54 -- # (( i != 0 )) 00:18:11.919 21:21:46 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:18:11.919 21:21:46 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:18:11.919 21:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.919 21:21:46 -- common/autotest_common.sh@10 -- # set +x 00:18:11.919 21:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.919 21:21:46 -- target/host_management.sh@55 -- # read_io_count=2971 00:18:11.919 21:21:46 -- target/host_management.sh@58 -- # '[' 2971 -ge 100 ']' 00:18:11.919 21:21:46 -- target/host_management.sh@59 -- # ret=0 00:18:11.919 21:21:46 -- target/host_management.sh@60 -- # break 00:18:11.919 21:21:46 -- target/host_management.sh@64 -- # return 0 00:18:11.919 21:21:46 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:11.919 21:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.919 21:21:46 -- common/autotest_common.sh@10 -- # set +x 00:18:11.919 21:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.919 21:21:46 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:11.919 21:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.919 21:21:46 -- common/autotest_common.sh@10 -- # set +x 00:18:11.919 21:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.919 21:21:46 -- target/host_management.sh@87 -- # sleep 1 00:18:12.857 [2024-07-26 21:21:47.651615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182700 00:18:12.857 [2024-07-26 21:21:47.651654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182500 00:18:12.857 [2024-07-26 21:21:47.651682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 
00:18:12.857 [2024-07-26 21:21:47.651694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182700 00:18:12.857 [2024-07-26 21:21:47.651703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:18:12.857 [2024-07-26 21:21:47.651724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182400 00:18:12.857 [2024-07-26 21:21:47.651744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:18:12.857 [2024-07-26 21:21:47.651764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182500 00:18:12.857 [2024-07-26 21:21:47.651784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:18:12.857 [2024-07-26 21:21:47.651809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:18:12.857 [2024-07-26 21:21:47.651829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182400 00:18:12.857 [2024-07-26 21:21:47.651849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:18:12.857 [2024-07-26 21:21:47.651869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 
00:18:12.857 [2024-07-26 21:21:47.651881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182600 00:18:12.857 [2024-07-26 21:21:47.651890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:18:12.857 [2024-07-26 21:21:47.651910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182500 00:18:12.857 [2024-07-26 21:21:47.651930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182700 00:18:12.857 [2024-07-26 21:21:47.651950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:18:12.857 [2024-07-26 21:21:47.651970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.651980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:18:12.857 [2024-07-26 21:21:47.651990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.652001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:18:12.857 [2024-07-26 21:21:47.652010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.652021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182700 00:18:12.857 [2024-07-26 21:21:47.652033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.652044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:18:12.857 [2024-07-26 21:21:47.652053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 
00:18:12.857 [2024-07-26 21:21:47.652063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:18:12.857 [2024-07-26 21:21:47.652072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.857 [2024-07-26 21:21:47.652083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:18:12.857 [2024-07-26 21:21:47.652092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182400 00:18:12.858 [2024-07-26 21:21:47.652112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182400 00:18:12.858 [2024-07-26 21:21:47.652132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:18:12.858 [2024-07-26 21:21:47.652152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182000 00:18:12.858 [2024-07-26 21:21:47.652172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182500 00:18:12.858 [2024-07-26 21:21:47.652192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182600 00:18:12.858 [2024-07-26 21:21:47.652211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:18:12.858 [2024-07-26 21:21:47.652231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 
00:18:12.858 [2024-07-26 21:21:47.652241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:18:12.858 [2024-07-26 21:21:47.652252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:18:12.858 [2024-07-26 21:21:47.652272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182400 00:18:12.858 [2024-07-26 21:21:47.652293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182600 00:18:12.858 [2024-07-26 21:21:47.652313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:18:12.858 [2024-07-26 21:21:47.652333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:18:12.858 [2024-07-26 21:21:47.652356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182400 00:18:12.858 [2024-07-26 21:21:47.652377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:18:12.858 [2024-07-26 21:21:47.652397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182000 00:18:12.858 [2024-07-26 21:21:47.652417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 
00:18:12.858 [2024-07-26 21:21:47.652428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:18:12.858 [2024-07-26 21:21:47.652438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182000 00:18:12.858 [2024-07-26 21:21:47.652458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182500 00:18:12.858 [2024-07-26 21:21:47.652478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:18:12.858 [2024-07-26 21:21:47.652503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:18:12.858 [2024-07-26 21:21:47.652523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:18:12.858 [2024-07-26 21:21:47.652543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:18:12.858 [2024-07-26 21:21:47.652563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:18:12.858 [2024-07-26 21:21:47.652583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 
00:18:12.858 [2024-07-26 21:21:47.652615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c756000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c735000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c714000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6f3000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6d2000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6b1000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c690000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 
00:18:12.858 [2024-07-26 21:21:47.652802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x182300 00:18:12.858 [2024-07-26 21:21:47.652811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.858 [2024-07-26 21:21:47.652822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x182300 00:18:12.859 [2024-07-26 21:21:47.652831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.859 [2024-07-26 21:21:47.652842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x182300 00:18:12.859 [2024-07-26 21:21:47.652851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.859 [2024-07-26 21:21:47.652862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x182300 00:18:12.859 [2024-07-26 21:21:47.652871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.859 [2024-07-26 21:21:47.652882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9c9000 len:0x10000 key:0x182300 00:18:12.859 [2024-07-26 21:21:47.652891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.859 [2024-07-26 21:21:47.652902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9a8000 len:0x10000 key:0x182300 00:18:12.859 [2024-07-26 21:21:47.652911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.859 [2024-07-26 21:21:47.652922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c147000 len:0x10000 key:0x182300 00:18:12.859 [2024-07-26 21:21:47.652931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.859 [2024-07-26 21:21:47.652944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c126000 len:0x10000 key:0x182300 00:18:12.859 [2024-07-26 21:21:47.652953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25512 cdw0:29a13000 sqhd:a486 p:1 m:0 dnr:0 00:18:12.859 [2024-07-26 21:21:47.655171] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 
00:18:12.859 [2024-07-26 21:21:47.656051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.859 task offset: 23040 on job bdev=Nvme0n1 fails 00:18:12.859 00:18:12.859 Latency(us) 00:18:12.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.859 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.859 Job: Nvme0n1 ended in about 1.59 seconds with error 00:18:12.859 Verification LBA range: start 0x0 length 0x400 00:18:12.859 Nvme0n1 : 1.59 2028.75 126.80 40.15 0.00 30758.94 3486.52 1013343.85 00:18:12.859 =================================================================================================================== 00:18:12.859 Total : 2028.75 126.80 40.15 0.00 30758.94 3486.52 1013343.85 00:18:12.859 [2024-07-26 21:21:47.657702] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:12.859 21:21:47 -- target/host_management.sh@91 -- # kill -9 1672958 00:18:12.859 21:21:47 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:12.859 21:21:47 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:12.859 21:21:47 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:12.859 21:21:47 -- nvmf/common.sh@520 -- # config=() 00:18:12.859 21:21:47 -- nvmf/common.sh@520 -- # local subsystem config 00:18:12.859 21:21:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:12.859 21:21:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:12.859 { 00:18:12.859 "params": { 00:18:12.859 "name": "Nvme$subsystem", 00:18:12.859 "trtype": "$TEST_TRANSPORT", 00:18:12.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.859 "adrfam": "ipv4", 00:18:12.859 "trsvcid": "$NVMF_PORT", 00:18:12.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.859 "hdgst": ${hdgst:-false}, 00:18:12.859 "ddgst": ${ddgst:-false} 00:18:12.859 }, 00:18:12.859 "method": "bdev_nvme_attach_controller" 00:18:12.859 } 00:18:12.859 EOF 00:18:12.859 )") 00:18:12.859 21:21:47 -- nvmf/common.sh@542 -- # cat 00:18:12.859 21:21:47 -- nvmf/common.sh@544 -- # jq . 00:18:12.859 21:21:47 -- nvmf/common.sh@545 -- # IFS=, 00:18:12.859 21:21:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:12.859 "params": { 00:18:12.859 "name": "Nvme0", 00:18:12.859 "trtype": "rdma", 00:18:12.859 "traddr": "192.168.100.8", 00:18:12.859 "adrfam": "ipv4", 00:18:12.859 "trsvcid": "4420", 00:18:12.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:12.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:12.859 "hdgst": false, 00:18:12.859 "ddgst": false 00:18:12.859 }, 00:18:12.859 "method": "bdev_nvme_attach_controller" 00:18:12.859 }' 00:18:12.859 [2024-07-26 21:21:47.713776] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:18:12.859 [2024-07-26 21:21:47.713828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673244 ] 00:18:13.118 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.118 [2024-07-26 21:21:47.801241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.118 [2024-07-26 21:21:47.838089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.377 Running I/O for 1 seconds... 00:18:14.341 00:18:14.341 Latency(us) 00:18:14.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.341 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:14.341 Verification LBA range: start 0x0 length 0x400 00:18:14.341 Nvme0n1 : 1.00 5568.37 348.02 0.00 0.00 11318.55 337.51 24851.25 00:18:14.341 =================================================================================================================== 00:18:14.341 Total : 5568.37 348.02 0.00 0.00 11318.55 337.51 24851.25 00:18:14.341 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1672958 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:18:14.341 21:21:49 -- target/host_management.sh@101 -- # stoptarget 00:18:14.341 21:21:49 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:18:14.341 21:21:49 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:14.600 21:21:49 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:14.600 21:21:49 -- target/host_management.sh@40 -- # nvmftestfini 00:18:14.600 21:21:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:14.600 21:21:49 -- nvmf/common.sh@116 -- # sync 00:18:14.600 21:21:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:14.600 21:21:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:14.600 21:21:49 -- nvmf/common.sh@119 -- # set +e 00:18:14.600 21:21:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:14.600 21:21:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:14.600 rmmod nvme_rdma 00:18:14.600 rmmod nvme_fabrics 00:18:14.600 21:21:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:14.600 21:21:49 -- nvmf/common.sh@123 -- # set -e 00:18:14.600 21:21:49 -- nvmf/common.sh@124 -- # return 0 00:18:14.600 21:21:49 -- nvmf/common.sh@477 -- # '[' -n 1672651 ']' 00:18:14.600 21:21:49 -- nvmf/common.sh@478 -- # killprocess 1672651 00:18:14.600 21:21:49 -- common/autotest_common.sh@926 -- # '[' -z 1672651 ']' 00:18:14.600 21:21:49 -- common/autotest_common.sh@930 -- # kill -0 1672651 00:18:14.600 21:21:49 -- common/autotest_common.sh@931 -- # uname 00:18:14.600 21:21:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:14.600 21:21:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1672651 00:18:14.600 21:21:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:14.600 21:21:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:14.600 21:21:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1672651' 00:18:14.600 killing process with pid 1672651 00:18:14.600 21:21:49 -- common/autotest_common.sh@945 -- # kill 1672651 00:18:14.600 
21:21:49 -- common/autotest_common.sh@950 -- # wait 1672651 00:18:14.859 [2024-07-26 21:21:49.565702] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:14.859 21:21:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:14.859 21:21:49 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:14.859 00:18:14.859 real 0m5.034s 00:18:14.859 user 0m22.507s 00:18:14.859 sys 0m1.064s 00:18:14.859 21:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.859 21:21:49 -- common/autotest_common.sh@10 -- # set +x 00:18:14.859 ************************************ 00:18:14.859 END TEST nvmf_host_management 00:18:14.859 ************************************ 00:18:14.859 21:21:49 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:14.859 00:18:14.859 real 0m13.368s 00:18:14.859 user 0m24.871s 00:18:14.859 sys 0m7.307s 00:18:14.859 21:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.859 21:21:49 -- common/autotest_common.sh@10 -- # set +x 00:18:14.859 ************************************ 00:18:14.859 END TEST nvmf_host_management 00:18:14.859 ************************************ 00:18:14.859 21:21:49 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:18:14.859 21:21:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:14.859 21:21:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:14.859 21:21:49 -- common/autotest_common.sh@10 -- # set +x 00:18:14.859 ************************************ 00:18:14.859 START TEST nvmf_lvol 00:18:14.859 ************************************ 00:18:14.859 21:21:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:18:15.119 * Looking for test storage... 
00:18:15.119 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:15.119 21:21:49 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.119 21:21:49 -- nvmf/common.sh@7 -- # uname -s 00:18:15.119 21:21:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.119 21:21:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.119 21:21:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.119 21:21:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.119 21:21:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.119 21:21:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.119 21:21:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.119 21:21:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.119 21:21:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.119 21:21:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.119 21:21:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:15.119 21:21:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:15.119 21:21:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.119 21:21:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.119 21:21:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.119 21:21:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:15.119 21:21:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.119 21:21:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.119 21:21:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.119 21:21:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.119 21:21:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.119 21:21:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.119 21:21:49 -- paths/export.sh@5 -- # export PATH 00:18:15.119 21:21:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.119 21:21:49 -- nvmf/common.sh@46 -- # : 0 00:18:15.119 21:21:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:15.119 21:21:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:15.119 21:21:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:15.119 21:21:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.119 21:21:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.119 21:21:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:15.119 21:21:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:15.119 21:21:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:15.119 21:21:49 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.119 21:21:49 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.119 21:21:49 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:18:15.119 21:21:49 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:15.119 21:21:49 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:15.119 21:21:49 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:15.119 21:21:49 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:15.119 21:21:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.119 21:21:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:15.119 21:21:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:15.119 21:21:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:15.119 21:21:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.119 21:21:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.120 21:21:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.120 21:21:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:15.120 21:21:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:15.120 21:21:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:15.120 21:21:49 -- common/autotest_common.sh@10 -- # set +x 00:18:23.236 21:21:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:23.236 21:21:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:23.236 21:21:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:23.236 21:21:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:23.236 21:21:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:23.236 21:21:57 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:18:23.237 21:21:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:23.237 21:21:57 -- nvmf/common.sh@294 -- # net_devs=() 00:18:23.237 21:21:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:23.237 21:21:57 -- nvmf/common.sh@295 -- # e810=() 00:18:23.237 21:21:57 -- nvmf/common.sh@295 -- # local -ga e810 00:18:23.237 21:21:57 -- nvmf/common.sh@296 -- # x722=() 00:18:23.237 21:21:57 -- nvmf/common.sh@296 -- # local -ga x722 00:18:23.237 21:21:57 -- nvmf/common.sh@297 -- # mlx=() 00:18:23.237 21:21:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:23.237 21:21:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.237 21:21:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:23.237 21:21:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:23.237 21:21:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:23.237 21:21:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:23.237 21:21:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:23.237 21:21:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:23.237 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:23.237 21:21:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:23.237 21:21:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:23.237 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:23.237 21:21:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:23.237 21:21:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:23.237 21:21:57 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.237 21:21:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:23.237 21:21:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.237 21:21:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:23.237 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:23.237 21:21:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.237 21:21:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.237 21:21:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:23.237 21:21:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.237 21:21:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:23.237 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:23.237 21:21:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.237 21:21:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:23.237 21:21:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:23.237 21:21:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:23.237 21:21:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:23.237 21:21:57 -- nvmf/common.sh@57 -- # uname 00:18:23.237 21:21:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:23.237 21:21:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:23.237 21:21:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:23.237 21:21:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:23.237 21:21:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:23.237 21:21:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:23.237 21:21:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:23.237 21:21:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:23.237 21:21:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:23.237 21:21:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:23.237 21:21:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:23.237 21:21:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:23.237 21:21:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:23.237 21:21:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:23.237 21:21:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:23.237 21:21:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:23.237 21:21:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:23.237 21:21:57 -- nvmf/common.sh@104 -- # continue 2 00:18:23.237 21:21:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:23.237 21:21:57 -- nvmf/common.sh@104 -- # continue 2 00:18:23.237 21:21:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:23.237 21:21:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:23.237 21:21:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:23.237 21:21:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:23.237 21:21:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:23.237 21:21:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:23.237 21:21:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:23.237 21:21:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:23.237 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:23.237 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:23.237 altname enp217s0f0np0 00:18:23.237 altname ens818f0np0 00:18:23.237 inet 192.168.100.8/24 scope global mlx_0_0 00:18:23.237 valid_lft forever preferred_lft forever 00:18:23.237 21:21:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:23.237 21:21:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:23.237 21:21:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:23.237 21:21:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:23.237 21:21:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:23.237 21:21:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:23.237 21:21:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:23.237 21:21:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:23.237 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:23.237 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:23.237 altname enp217s0f1np1 00:18:23.237 altname ens818f1np1 00:18:23.237 inet 192.168.100.9/24 scope global mlx_0_1 00:18:23.237 valid_lft forever preferred_lft forever 00:18:23.237 21:21:57 -- nvmf/common.sh@410 -- # return 0 00:18:23.237 21:21:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:23.237 21:21:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:23.237 21:21:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:23.237 21:21:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:23.237 21:21:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:23.237 21:21:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:23.237 21:21:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:23.237 21:21:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:23.237 21:21:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:23.237 21:21:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:23.237 21:21:57 -- nvmf/common.sh@104 -- # continue 2 00:18:23.237 21:21:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.237 21:21:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:23.237 21:21:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:23.237 21:21:57 -- nvmf/common.sh@104 -- # continue 2 00:18:23.237 21:21:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:23.237 21:21:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:23.237 21:21:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:23.238 21:21:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:23.238 21:21:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:23.238 21:21:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:23.238 21:21:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:23.238 21:21:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:23.238 21:21:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:23.238 21:21:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:23.238 21:21:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:23.238 21:21:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:23.238 21:21:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:23.238 192.168.100.9' 00:18:23.238 21:21:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:23.238 192.168.100.9' 00:18:23.238 21:21:57 -- nvmf/common.sh@445 -- # head -n 1 00:18:23.238 21:21:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:23.238 21:21:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:23.238 192.168.100.9' 00:18:23.238 21:21:57 -- nvmf/common.sh@446 -- # tail -n +2 00:18:23.238 21:21:57 -- nvmf/common.sh@446 -- # head -n 1 00:18:23.238 21:21:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:23.238 21:21:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:23.238 21:21:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:23.238 21:21:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:23.238 21:21:57 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:23.238 21:21:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:23.238 21:21:57 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:23.238 21:21:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:23.238 21:21:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:23.238 21:21:57 -- common/autotest_common.sh@10 -- # set +x 00:18:23.238 21:21:57 -- nvmf/common.sh@469 -- # nvmfpid=1677602 00:18:23.238 21:21:57 -- nvmf/common.sh@470 -- # waitforlisten 1677602 00:18:23.238 21:21:57 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:23.238 21:21:57 -- common/autotest_common.sh@819 -- # '[' -z 1677602 ']' 00:18:23.238 21:21:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.238 21:21:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:23.238 21:21:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.238 21:21:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:23.238 21:21:57 -- common/autotest_common.sh@10 -- # set +x 00:18:23.238 [2024-07-26 21:21:57.681504] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:18:23.238 [2024-07-26 21:21:57.681564] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.238 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.238 [2024-07-26 21:21:57.769537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:23.238 [2024-07-26 21:21:57.807150] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:23.238 [2024-07-26 21:21:57.807269] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.238 [2024-07-26 21:21:57.807280] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.238 [2024-07-26 21:21:57.807289] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.238 [2024-07-26 21:21:57.807462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.238 [2024-07-26 21:21:57.807481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.238 [2024-07-26 21:21:57.807483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.804 21:21:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:23.804 21:21:58 -- common/autotest_common.sh@852 -- # return 0 00:18:23.804 21:21:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:23.804 21:21:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:23.804 21:21:58 -- common/autotest_common.sh@10 -- # set +x 00:18:23.804 21:21:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.804 21:21:58 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:24.063 [2024-07-26 21:21:58.684911] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x186c560/0x1870a50) succeed. 00:18:24.063 [2024-07-26 21:21:58.695143] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x186dab0/0x18b20e0) succeed. 
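Before the lvol test attaches a host, it provisions the target through the series of rpc.py calls that the trace walks through next. A condensed sketch of that sequence, with the lvstore and lvol identifiers replaced by shell captures (the concrete UUIDs appear in the log), looks roughly like this:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# RDMA transport plus backing store: two 64 MiB / 512-byte-block malloc bdevs
# striped into raid0, an lvstore on the raid, and a lvol of size 20
# (the script's LVOL_BDEV_INIT_SIZE) inside it.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512        # Malloc0
$rpc bdev_malloc_create 64 512        # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol over NVMe-oF/RDMA on 192.168.100.8:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420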
00:18:24.063 21:21:58 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:24.322 21:21:58 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:24.322 21:21:58 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:24.322 21:21:59 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:24.322 21:21:59 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:24.581 21:21:59 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:24.840 21:21:59 -- target/nvmf_lvol.sh@29 -- # lvs=7cd57fc0-a444-49e5-8c59-59e2e1644f12 00:18:24.840 21:21:59 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7cd57fc0-a444-49e5-8c59-59e2e1644f12 lvol 20 00:18:25.098 21:21:59 -- target/nvmf_lvol.sh@32 -- # lvol=86483970-eaf1-4799-92cc-e75e8a6a5712 00:18:25.098 21:21:59 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:25.098 21:21:59 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86483970-eaf1-4799-92cc-e75e8a6a5712 00:18:25.357 21:22:00 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:25.615 [2024-07-26 21:22:00.242593] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:25.615 21:22:00 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:25.615 21:22:00 -- target/nvmf_lvol.sh@42 -- # perf_pid=1678032 00:18:25.615 21:22:00 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:25.615 21:22:00 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:25.615 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.992 21:22:01 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 86483970-eaf1-4799-92cc-e75e8a6a5712 MY_SNAPSHOT 00:18:26.992 21:22:01 -- target/nvmf_lvol.sh@47 -- # snapshot=7e2fa001-2583-4885-b30e-e94140f45bc4 00:18:26.992 21:22:01 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 86483970-eaf1-4799-92cc-e75e8a6a5712 30 00:18:26.992 21:22:01 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7e2fa001-2583-4885-b30e-e94140f45bc4 MY_CLONE 00:18:27.250 21:22:01 -- target/nvmf_lvol.sh@49 -- # clone=e52604ab-4de9-40ef-94cb-46651a4c8b18 00:18:27.250 21:22:01 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e52604ab-4de9-40ef-94cb-46651a4c8b18 00:18:27.509 21:22:02 -- target/nvmf_lvol.sh@53 -- # wait 1678032 00:18:37.483 Initializing NVMe Controllers 00:18:37.483 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:18:37.483 Controller IO queue size 128, less than required. 00:18:37.483 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:37.483 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:37.483 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:37.483 Initialization complete. Launching workers. 00:18:37.484 ======================================================== 00:18:37.484 Latency(us) 00:18:37.484 Device Information : IOPS MiB/s Average min max 00:18:37.484 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16664.80 65.10 7682.89 2263.80 35755.83 00:18:37.484 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16604.80 64.86 7710.17 3460.30 36861.60 00:18:37.484 ======================================================== 00:18:37.484 Total : 33269.60 129.96 7696.50 2263.80 36861.60 00:18:37.484 00:18:37.484 21:22:11 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:37.484 21:22:11 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86483970-eaf1-4799-92cc-e75e8a6a5712 00:18:37.484 21:22:12 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7cd57fc0-a444-49e5-8c59-59e2e1644f12 00:18:37.484 21:22:12 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:37.484 21:22:12 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:37.484 21:22:12 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:37.484 21:22:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:37.484 21:22:12 -- nvmf/common.sh@116 -- # sync 00:18:37.484 21:22:12 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:37.484 21:22:12 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:37.484 21:22:12 -- nvmf/common.sh@119 -- # set +e 00:18:37.484 21:22:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:37.484 21:22:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:37.742 rmmod nvme_rdma 00:18:37.742 rmmod nvme_fabrics 00:18:37.742 21:22:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:37.742 21:22:12 -- nvmf/common.sh@123 -- # set -e 00:18:37.742 21:22:12 -- nvmf/common.sh@124 -- # return 0 00:18:37.742 21:22:12 -- nvmf/common.sh@477 -- # '[' -n 1677602 ']' 00:18:37.742 21:22:12 -- nvmf/common.sh@478 -- # killprocess 1677602 00:18:37.742 21:22:12 -- common/autotest_common.sh@926 -- # '[' -z 1677602 ']' 00:18:37.742 21:22:12 -- common/autotest_common.sh@930 -- # kill -0 1677602 00:18:37.742 21:22:12 -- common/autotest_common.sh@931 -- # uname 00:18:37.742 21:22:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:37.742 21:22:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1677602 00:18:37.742 21:22:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:37.742 21:22:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:37.742 21:22:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1677602' 00:18:37.742 killing process with pid 1677602 00:18:37.742 21:22:12 -- common/autotest_common.sh@945 -- # kill 1677602 00:18:37.742 21:22:12 -- common/autotest_common.sh@950 -- # wait 1677602 00:18:38.001 21:22:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:38.001 21:22:12 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:38.001 00:18:38.001 real 0m23.044s 00:18:38.001 user 1m11.075s 00:18:38.001 sys 0m7.316s 00:18:38.001 21:22:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.001 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:18:38.001 ************************************ 00:18:38.001 END TEST nvmf_lvol 00:18:38.001 ************************************ 00:18:38.001 21:22:12 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:38.001 21:22:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:38.001 21:22:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:38.001 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:18:38.001 ************************************ 00:18:38.001 START TEST nvmf_lvs_grow 00:18:38.001 ************************************ 00:18:38.001 21:22:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:38.001 * Looking for test storage... 00:18:38.001 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.001 21:22:12 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.001 21:22:12 -- nvmf/common.sh@7 -- # uname -s 00:18:38.261 21:22:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.261 21:22:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.261 21:22:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.261 21:22:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.261 21:22:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.261 21:22:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.261 21:22:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.261 21:22:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.261 21:22:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.261 21:22:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.261 21:22:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:38.261 21:22:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:38.261 21:22:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.261 21:22:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.261 21:22:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.261 21:22:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:38.261 21:22:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.261 21:22:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.261 21:22:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.261 21:22:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:38.261 21:22:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.261 21:22:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.261 21:22:12 -- paths/export.sh@5 -- # export PATH 00:18:38.261 21:22:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.261 21:22:12 -- nvmf/common.sh@46 -- # : 0 00:18:38.261 21:22:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:38.261 21:22:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:38.261 21:22:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:38.261 21:22:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.261 21:22:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.261 21:22:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:38.261 21:22:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:38.261 21:22:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:38.261 21:22:12 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:38.261 21:22:12 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:38.261 21:22:12 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:38.261 21:22:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:38.261 21:22:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.261 21:22:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:38.261 21:22:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:38.261 21:22:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:38.261 21:22:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.261 21:22:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.261 21:22:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.261 21:22:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:38.261 21:22:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:38.261 21:22:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:38.261 
21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.468 21:22:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:46.469 21:22:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:46.469 21:22:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:46.469 21:22:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:46.469 21:22:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:46.469 21:22:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:46.469 21:22:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:46.469 21:22:20 -- nvmf/common.sh@294 -- # net_devs=() 00:18:46.469 21:22:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:46.469 21:22:20 -- nvmf/common.sh@295 -- # e810=() 00:18:46.469 21:22:20 -- nvmf/common.sh@295 -- # local -ga e810 00:18:46.469 21:22:20 -- nvmf/common.sh@296 -- # x722=() 00:18:46.469 21:22:20 -- nvmf/common.sh@296 -- # local -ga x722 00:18:46.469 21:22:20 -- nvmf/common.sh@297 -- # mlx=() 00:18:46.469 21:22:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:46.469 21:22:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.469 21:22:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:46.469 21:22:20 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:46.469 21:22:20 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:46.469 21:22:20 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:46.469 21:22:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:46.469 21:22:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:46.469 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:46.469 21:22:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:46.469 21:22:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:46.469 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:46.469 21:22:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:46.469 21:22:20 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:46.469 21:22:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:46.469 21:22:20 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.469 21:22:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:46.469 21:22:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.469 21:22:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:46.469 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:46.469 21:22:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.469 21:22:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.469 21:22:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:46.469 21:22:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.469 21:22:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:46.469 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:46.469 21:22:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.469 21:22:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:46.469 21:22:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:46.469 21:22:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:46.469 21:22:20 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:46.469 21:22:20 -- nvmf/common.sh@57 -- # uname 00:18:46.469 21:22:20 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:46.469 21:22:20 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:46.469 21:22:20 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:46.469 21:22:20 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:46.469 21:22:20 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:46.469 21:22:20 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:46.469 21:22:20 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:46.469 21:22:20 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:46.469 21:22:20 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:46.469 21:22:20 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:46.469 21:22:20 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:46.469 21:22:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:46.469 21:22:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:46.469 21:22:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:46.469 21:22:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:46.469 21:22:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:46.469 21:22:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:46.469 
21:22:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:46.469 21:22:20 -- nvmf/common.sh@104 -- # continue 2 00:18:46.469 21:22:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:46.469 21:22:20 -- nvmf/common.sh@104 -- # continue 2 00:18:46.469 21:22:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:46.469 21:22:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:46.469 21:22:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:46.469 21:22:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:46.469 21:22:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:46.469 21:22:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:46.469 21:22:20 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:46.469 21:22:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:46.469 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:46.469 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:46.469 altname enp217s0f0np0 00:18:46.469 altname ens818f0np0 00:18:46.469 inet 192.168.100.8/24 scope global mlx_0_0 00:18:46.469 valid_lft forever preferred_lft forever 00:18:46.469 21:22:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:46.469 21:22:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:46.469 21:22:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:46.469 21:22:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:46.469 21:22:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:46.469 21:22:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:46.469 21:22:20 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:46.469 21:22:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:46.469 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:46.469 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:46.469 altname enp217s0f1np1 00:18:46.469 altname ens818f1np1 00:18:46.469 inet 192.168.100.9/24 scope global mlx_0_1 00:18:46.469 valid_lft forever preferred_lft forever 00:18:46.469 21:22:20 -- nvmf/common.sh@410 -- # return 0 00:18:46.469 21:22:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:46.469 21:22:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:46.469 21:22:20 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:46.469 21:22:20 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:46.469 21:22:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:46.469 21:22:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:46.469 21:22:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:46.469 21:22:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:46.469 21:22:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:46.469 21:22:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
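The interface-address discovery being traced here boils down to reading the first IPv4 address on each Mellanox port and dropping the prefix length. A simplified sketch follows, using the helper and interface names shown in the trace; the real common.sh additionally walks the RDMA interface list and handles the no-interface case.

get_ip_address() {
    local interface=$1
    # First IPv4 address on the port, without the /24 suffix.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run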
00:18:46.469 21:22:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:46.469 21:22:20 -- nvmf/common.sh@104 -- # continue 2 00:18:46.469 21:22:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.469 21:22:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:46.469 21:22:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:46.469 21:22:20 -- nvmf/common.sh@104 -- # continue 2 00:18:46.470 21:22:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:46.470 21:22:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:46.470 21:22:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:46.470 21:22:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:46.470 21:22:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:46.470 21:22:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:46.470 21:22:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:46.470 21:22:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:46.470 21:22:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:46.470 21:22:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:46.470 21:22:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:46.470 21:22:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:46.470 21:22:20 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:46.470 192.168.100.9' 00:18:46.470 21:22:20 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:46.470 192.168.100.9' 00:18:46.470 21:22:20 -- nvmf/common.sh@445 -- # head -n 1 00:18:46.470 21:22:20 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:46.470 21:22:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:46.470 192.168.100.9' 00:18:46.470 21:22:20 -- nvmf/common.sh@446 -- # tail -n +2 00:18:46.470 21:22:20 -- nvmf/common.sh@446 -- # head -n 1 00:18:46.470 21:22:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:46.470 21:22:20 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:46.470 21:22:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:46.470 21:22:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:46.470 21:22:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:46.470 21:22:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:46.470 21:22:20 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:46.470 21:22:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:46.470 21:22:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:46.470 21:22:20 -- common/autotest_common.sh@10 -- # set +x 00:18:46.470 21:22:20 -- nvmf/common.sh@469 -- # nvmfpid=1684110 00:18:46.470 21:22:20 -- nvmf/common.sh@470 -- # waitforlisten 1684110 00:18:46.470 21:22:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:46.470 21:22:20 -- common/autotest_common.sh@819 -- # '[' -z 1684110 ']' 00:18:46.470 21:22:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.470 21:22:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:46.470 21:22:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.470 21:22:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:46.470 21:22:20 -- common/autotest_common.sh@10 -- # set +x 00:18:46.470 [2024-07-26 21:22:20.820775] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:46.470 [2024-07-26 21:22:20.820854] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.470 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.470 [2024-07-26 21:22:20.908547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.470 [2024-07-26 21:22:20.945232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:46.470 [2024-07-26 21:22:20.945341] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.470 [2024-07-26 21:22:20.945350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.470 [2024-07-26 21:22:20.945358] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.470 [2024-07-26 21:22:20.945383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.038 21:22:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:47.038 21:22:21 -- common/autotest_common.sh@852 -- # return 0 00:18:47.038 21:22:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:47.038 21:22:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:47.038 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:47.038 21:22:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.038 21:22:21 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:47.038 [2024-07-26 21:22:21.825581] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x221df50/0x2222440) succeed. 00:18:47.038 [2024-07-26 21:22:21.834157] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x221f450/0x2263ad0) succeed. 
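The lvs_grow tests that start here drive the same RDMA target, but back the lvstore with a resizable file-based AIO bdev instead of malloc/raid. A condensed sketch of the clean-grow setup the trace performs next (paths, sizes, and cluster options copied from the log; UUID capture simplified):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
testdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target

# 200 MiB file-backed AIO bdev with 4 KiB blocks, an lvstore with 4 MiB clusters,
# and a lvol of size 150 carved out of it.
truncate -s 200M "$testdir/aio_bdev"
$rpc bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

# Later the backing file is grown to 400 MiB and rescanned so the lvstore can expand.
truncate -s 400M "$testdir/aio_bdev"
$rpc bdev_aio_rescan aio_bdev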
00:18:47.038 21:22:21 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:47.038 21:22:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:47.038 21:22:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:47.038 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:47.038 ************************************ 00:18:47.038 START TEST lvs_grow_clean 00:18:47.038 ************************************ 00:18:47.038 21:22:21 -- common/autotest_common.sh@1104 -- # lvs_grow 00:18:47.038 21:22:21 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:47.038 21:22:21 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:47.038 21:22:21 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:47.038 21:22:21 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:47.038 21:22:21 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:47.038 21:22:21 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:47.038 21:22:21 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:47.297 21:22:21 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:47.297 21:22:21 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:47.297 21:22:22 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:47.297 21:22:22 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:47.556 21:22:22 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7813cf5a-5e0f-4564-9632-f955bc45ff80 00:18:47.556 21:22:22 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:18:47.556 21:22:22 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:47.814 21:22:22 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:47.814 21:22:22 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:47.814 21:22:22 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 lvol 150 00:18:47.814 21:22:22 -- target/nvmf_lvs_grow.sh@33 -- # lvol=f140249a-e68d-472a-b3b8-d88f249fc96c 00:18:47.814 21:22:22 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:47.814 21:22:22 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:48.073 [2024-07-26 21:22:22.751042] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:48.073 [2024-07-26 21:22:22.751092] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:48.073 true 00:18:48.073 21:22:22 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:18:48.073 21:22:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:18:48.073 21:22:22 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:48.073 21:22:22 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:48.332 21:22:23 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f140249a-e68d-472a-b3b8-d88f249fc96c 00:18:48.589 21:22:23 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:48.589 [2024-07-26 21:22:23.417230] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:48.589 21:22:23 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:48.846 21:22:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1684692 00:18:48.846 21:22:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:48.846 21:22:23 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:48.846 21:22:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1684692 /var/tmp/bdevperf.sock 00:18:48.846 21:22:23 -- common/autotest_common.sh@819 -- # '[' -z 1684692 ']' 00:18:48.846 21:22:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.846 21:22:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:48.846 21:22:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.846 21:22:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:48.846 21:22:23 -- common/autotest_common.sh@10 -- # set +x 00:18:48.846 [2024-07-26 21:22:23.664126] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:18:48.846 [2024-07-26 21:22:23.664177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684692 ] 00:18:48.846 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.104 [2024-07-26 21:22:23.747084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.104 [2024-07-26 21:22:23.784145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.671 21:22:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:49.671 21:22:24 -- common/autotest_common.sh@852 -- # return 0 00:18:49.671 21:22:24 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:49.930 Nvme0n1 00:18:49.930 21:22:24 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:50.189 [ 00:18:50.189 { 00:18:50.189 "name": "Nvme0n1", 00:18:50.189 "aliases": [ 00:18:50.189 "f140249a-e68d-472a-b3b8-d88f249fc96c" 00:18:50.189 ], 00:18:50.189 "product_name": "NVMe disk", 00:18:50.189 "block_size": 4096, 00:18:50.189 "num_blocks": 38912, 00:18:50.189 "uuid": "f140249a-e68d-472a-b3b8-d88f249fc96c", 00:18:50.189 "assigned_rate_limits": { 00:18:50.189 "rw_ios_per_sec": 0, 00:18:50.189 "rw_mbytes_per_sec": 0, 00:18:50.189 "r_mbytes_per_sec": 0, 00:18:50.189 "w_mbytes_per_sec": 0 00:18:50.189 }, 00:18:50.189 "claimed": false, 00:18:50.189 "zoned": false, 00:18:50.189 "supported_io_types": { 00:18:50.189 "read": true, 00:18:50.189 "write": true, 00:18:50.189 "unmap": true, 00:18:50.189 "write_zeroes": true, 00:18:50.189 "flush": true, 00:18:50.189 "reset": true, 00:18:50.189 "compare": true, 00:18:50.189 "compare_and_write": true, 00:18:50.189 "abort": true, 00:18:50.189 "nvme_admin": true, 00:18:50.189 "nvme_io": true 00:18:50.189 }, 00:18:50.189 "memory_domains": [ 00:18:50.189 { 00:18:50.189 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:50.189 "dma_device_type": 0 00:18:50.189 } 00:18:50.189 ], 00:18:50.189 "driver_specific": { 00:18:50.189 "nvme": [ 00:18:50.189 { 00:18:50.189 "trid": { 00:18:50.189 "trtype": "RDMA", 00:18:50.189 "adrfam": "IPv4", 00:18:50.189 "traddr": "192.168.100.8", 00:18:50.189 "trsvcid": "4420", 00:18:50.189 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:50.189 }, 00:18:50.189 "ctrlr_data": { 00:18:50.189 "cntlid": 1, 00:18:50.189 "vendor_id": "0x8086", 00:18:50.189 "model_number": "SPDK bdev Controller", 00:18:50.189 "serial_number": "SPDK0", 00:18:50.189 "firmware_revision": "24.01.1", 00:18:50.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:50.189 "oacs": { 00:18:50.189 "security": 0, 00:18:50.189 "format": 0, 00:18:50.189 "firmware": 0, 00:18:50.189 "ns_manage": 0 00:18:50.189 }, 00:18:50.189 "multi_ctrlr": true, 00:18:50.189 "ana_reporting": false 00:18:50.189 }, 00:18:50.189 "vs": { 00:18:50.189 "nvme_version": "1.3" 00:18:50.189 }, 00:18:50.189 "ns_data": { 00:18:50.189 "id": 1, 00:18:50.189 "can_share": true 00:18:50.189 } 00:18:50.189 } 00:18:50.189 ], 00:18:50.189 "mp_policy": "active_passive" 00:18:50.189 } 00:18:50.189 } 00:18:50.189 ] 00:18:50.189 21:22:24 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1684960 00:18:50.189 21:22:24 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:50.189 21:22:24 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.189 Running I/O for 10 seconds... 00:18:51.126 Latency(us) 00:18:51.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.126 Nvme0n1 : 1.00 36740.00 143.52 0.00 0.00 0.00 0.00 0.00 00:18:51.126 =================================================================================================================== 00:18:51.126 Total : 36740.00 143.52 0.00 0.00 0.00 0.00 0.00 00:18:51.126 00:18:52.063 21:22:26 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:18:52.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.322 Nvme0n1 : 2.00 37056.50 144.75 0.00 0.00 0.00 0.00 0.00 00:18:52.322 =================================================================================================================== 00:18:52.322 Total : 37056.50 144.75 0.00 0.00 0.00 0.00 0.00 00:18:52.322 00:18:52.322 true 00:18:52.322 21:22:27 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:18:52.322 21:22:27 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:52.581 21:22:27 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:52.581 21:22:27 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:52.581 21:22:27 -- target/nvmf_lvs_grow.sh@65 -- # wait 1684960 00:18:53.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.149 Nvme0n1 : 3.00 37100.33 144.92 0.00 0.00 0.00 0.00 0.00 00:18:53.149 =================================================================================================================== 00:18:53.149 Total : 37100.33 144.92 0.00 0.00 0.00 0.00 0.00 00:18:53.149 00:18:54.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.086 Nvme0n1 : 4.00 37240.00 145.47 0.00 0.00 0.00 0.00 0.00 00:18:54.086 =================================================================================================================== 00:18:54.086 Total : 37240.00 145.47 0.00 0.00 0.00 0.00 0.00 00:18:54.086 00:18:55.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.465 Nvme0n1 : 5.00 37299.00 145.70 0.00 0.00 0.00 0.00 0.00 00:18:55.465 =================================================================================================================== 00:18:55.465 Total : 37299.00 145.70 0.00 0.00 0.00 0.00 0.00 00:18:55.465 00:18:56.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:56.403 Nvme0n1 : 6.00 37322.17 145.79 0.00 0.00 0.00 0.00 0.00 00:18:56.403 =================================================================================================================== 00:18:56.403 Total : 37322.17 145.79 0.00 0.00 0.00 0.00 0.00 00:18:56.403 00:18:57.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:57.341 Nvme0n1 : 7.00 37376.43 146.00 0.00 0.00 0.00 0.00 0.00 00:18:57.341 =================================================================================================================== 00:18:57.341 Total : 37376.43 146.00 0.00 0.00 0.00 0.00 0.00 00:18:57.341 
00:18:58.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:58.278 Nvme0n1 : 8.00 37376.38 146.00 0.00 0.00 0.00 0.00 0.00 00:18:58.278 =================================================================================================================== 00:18:58.278 Total : 37376.38 146.00 0.00 0.00 0.00 0.00 0.00 00:18:58.278 00:18:59.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:59.215 Nvme0n1 : 9.00 37405.00 146.11 0.00 0.00 0.00 0.00 0.00 00:18:59.215 =================================================================================================================== 00:18:59.215 Total : 37405.00 146.11 0.00 0.00 0.00 0.00 0.00 00:18:59.215 00:19:00.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.152 Nvme0n1 : 10.00 37433.20 146.22 0.00 0.00 0.00 0.00 0.00 00:19:00.152 =================================================================================================================== 00:19:00.152 Total : 37433.20 146.22 0.00 0.00 0.00 0.00 0.00 00:19:00.152 00:19:00.152 00:19:00.152 Latency(us) 00:19:00.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.152 Nvme0n1 : 10.00 37433.61 146.23 0.00 0.00 3416.67 2542.80 7864.32 00:19:00.152 =================================================================================================================== 00:19:00.152 Total : 37433.61 146.23 0.00 0.00 3416.67 2542.80 7864.32 00:19:00.152 0 00:19:00.152 21:22:34 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1684692 00:19:00.152 21:22:34 -- common/autotest_common.sh@926 -- # '[' -z 1684692 ']' 00:19:00.152 21:22:34 -- common/autotest_common.sh@930 -- # kill -0 1684692 00:19:00.152 21:22:34 -- common/autotest_common.sh@931 -- # uname 00:19:00.152 21:22:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:00.152 21:22:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1684692 00:19:00.411 21:22:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:00.411 21:22:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:00.411 21:22:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1684692' 00:19:00.411 killing process with pid 1684692 00:19:00.411 21:22:35 -- common/autotest_common.sh@945 -- # kill 1684692 00:19:00.411 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.411 00:19:00.411 Latency(us) 00:19:00.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.412 =================================================================================================================== 00:19:00.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.412 21:22:35 -- common/autotest_common.sh@950 -- # wait 1684692 00:19:00.412 21:22:35 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:00.673 21:22:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:00.673 21:22:35 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:19:00.975 21:22:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:00.975 21:22:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:19:00.975 21:22:35 -- target/nvmf_lvs_grow.sh@83 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:00.975 [2024-07-26 21:22:35.765431] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:01.236 21:22:35 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:19:01.236 21:22:35 -- common/autotest_common.sh@640 -- # local es=0 00:19:01.236 21:22:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:19:01.236 21:22:35 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:01.236 21:22:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:01.236 21:22:35 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:01.236 21:22:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:01.236 21:22:35 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:01.236 21:22:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:01.236 21:22:35 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:01.236 21:22:35 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:19:01.236 21:22:35 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:19:01.236 request: 00:19:01.236 { 00:19:01.236 "uuid": "7813cf5a-5e0f-4564-9632-f955bc45ff80", 00:19:01.236 "method": "bdev_lvol_get_lvstores", 00:19:01.236 "req_id": 1 00:19:01.236 } 00:19:01.236 Got JSON-RPC error response 00:19:01.236 response: 00:19:01.236 { 00:19:01.236 "code": -19, 00:19:01.236 "message": "No such device" 00:19:01.236 } 00:19:01.236 21:22:35 -- common/autotest_common.sh@643 -- # es=1 00:19:01.236 21:22:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:01.236 21:22:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:01.236 21:22:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:01.236 21:22:35 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:01.494 aio_bdev 00:19:01.495 21:22:36 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev f140249a-e68d-472a-b3b8-d88f249fc96c 00:19:01.495 21:22:36 -- common/autotest_common.sh@887 -- # local bdev_name=f140249a-e68d-472a-b3b8-d88f249fc96c 00:19:01.495 21:22:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:01.495 21:22:36 -- common/autotest_common.sh@889 -- # local i 00:19:01.495 21:22:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:01.495 21:22:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:01.495 21:22:36 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:01.495 21:22:36 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f140249a-e68d-472a-b3b8-d88f249fc96c -t 2000 00:19:01.754 [ 00:19:01.754 { 00:19:01.754 "name": 
"f140249a-e68d-472a-b3b8-d88f249fc96c", 00:19:01.754 "aliases": [ 00:19:01.754 "lvs/lvol" 00:19:01.754 ], 00:19:01.754 "product_name": "Logical Volume", 00:19:01.754 "block_size": 4096, 00:19:01.754 "num_blocks": 38912, 00:19:01.754 "uuid": "f140249a-e68d-472a-b3b8-d88f249fc96c", 00:19:01.754 "assigned_rate_limits": { 00:19:01.754 "rw_ios_per_sec": 0, 00:19:01.754 "rw_mbytes_per_sec": 0, 00:19:01.754 "r_mbytes_per_sec": 0, 00:19:01.754 "w_mbytes_per_sec": 0 00:19:01.754 }, 00:19:01.754 "claimed": false, 00:19:01.754 "zoned": false, 00:19:01.754 "supported_io_types": { 00:19:01.754 "read": true, 00:19:01.754 "write": true, 00:19:01.754 "unmap": true, 00:19:01.754 "write_zeroes": true, 00:19:01.754 "flush": false, 00:19:01.754 "reset": true, 00:19:01.754 "compare": false, 00:19:01.754 "compare_and_write": false, 00:19:01.754 "abort": false, 00:19:01.754 "nvme_admin": false, 00:19:01.754 "nvme_io": false 00:19:01.754 }, 00:19:01.754 "driver_specific": { 00:19:01.754 "lvol": { 00:19:01.754 "lvol_store_uuid": "7813cf5a-5e0f-4564-9632-f955bc45ff80", 00:19:01.754 "base_bdev": "aio_bdev", 00:19:01.754 "thin_provision": false, 00:19:01.754 "snapshot": false, 00:19:01.754 "clone": false, 00:19:01.754 "esnap_clone": false 00:19:01.754 } 00:19:01.754 } 00:19:01.754 } 00:19:01.754 ] 00:19:01.754 21:22:36 -- common/autotest_common.sh@895 -- # return 0 00:19:01.754 21:22:36 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:19:01.754 21:22:36 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:01.754 21:22:36 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:02.013 21:22:36 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:19:02.013 21:22:36 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:02.013 21:22:36 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:02.013 21:22:36 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f140249a-e68d-472a-b3b8-d88f249fc96c 00:19:02.272 21:22:36 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7813cf5a-5e0f-4564-9632-f955bc45ff80 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:02.531 00:19:02.531 real 0m15.421s 00:19:02.531 user 0m15.238s 00:19:02.531 sys 0m1.201s 00:19:02.531 21:22:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.531 21:22:37 -- common/autotest_common.sh@10 -- # set +x 00:19:02.531 ************************************ 00:19:02.531 END TEST lvs_grow_clean 00:19:02.531 ************************************ 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:19:02.531 21:22:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:02.531 21:22:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:02.531 21:22:37 -- common/autotest_common.sh@10 -- # set +x 00:19:02.531 ************************************ 00:19:02.531 START TEST lvs_grow_dirty 00:19:02.531 ************************************ 00:19:02.531 21:22:37 -- 
common/autotest_common.sh@1104 -- # lvs_grow dirty 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:02.531 21:22:37 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:02.789 21:22:37 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:02.789 21:22:37 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:03.048 21:22:37 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c063dbec-1b4e-42b1-9482-be2def491755 00:19:03.048 21:22:37 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:03.048 21:22:37 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:03.048 21:22:37 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:03.048 21:22:37 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:03.048 21:22:37 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c063dbec-1b4e-42b1-9482-be2def491755 lvol 150 00:19:03.307 21:22:38 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2a1d2182-74cb-4b9a-871b-6d71ab9ab86d 00:19:03.307 21:22:38 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:03.307 21:22:38 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:03.566 [2024-07-26 21:22:38.219425] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:03.566 [2024-07-26 21:22:38.219475] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:03.566 true 00:19:03.566 21:22:38 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:03.566 21:22:38 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:03.566 21:22:38 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:03.566 21:22:38 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:03.826 21:22:38 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
2a1d2182-74cb-4b9a-871b-6d71ab9ab86d 00:19:04.085 21:22:38 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:04.085 21:22:38 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:04.345 21:22:39 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1687446 00:19:04.345 21:22:39 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:04.345 21:22:39 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:04.345 21:22:39 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1687446 /var/tmp/bdevperf.sock 00:19:04.345 21:22:39 -- common/autotest_common.sh@819 -- # '[' -z 1687446 ']' 00:19:04.345 21:22:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.345 21:22:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:04.345 21:22:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.345 21:22:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:04.345 21:22:39 -- common/autotest_common.sh@10 -- # set +x 00:19:04.345 [2024-07-26 21:22:39.088576] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:04.345 [2024-07-26 21:22:39.088639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687446 ] 00:19:04.345 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.345 [2024-07-26 21:22:39.172634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.345 [2024-07-26 21:22:39.208340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.283 21:22:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:05.283 21:22:39 -- common/autotest_common.sh@852 -- # return 0 00:19:05.283 21:22:39 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:05.283 Nvme0n1 00:19:05.283 21:22:40 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:05.542 [ 00:19:05.542 { 00:19:05.542 "name": "Nvme0n1", 00:19:05.542 "aliases": [ 00:19:05.542 "2a1d2182-74cb-4b9a-871b-6d71ab9ab86d" 00:19:05.542 ], 00:19:05.542 "product_name": "NVMe disk", 00:19:05.542 "block_size": 4096, 00:19:05.542 "num_blocks": 38912, 00:19:05.542 "uuid": "2a1d2182-74cb-4b9a-871b-6d71ab9ab86d", 00:19:05.542 "assigned_rate_limits": { 00:19:05.542 "rw_ios_per_sec": 0, 00:19:05.542 "rw_mbytes_per_sec": 0, 00:19:05.542 "r_mbytes_per_sec": 0, 00:19:05.542 "w_mbytes_per_sec": 0 00:19:05.542 }, 00:19:05.542 "claimed": false, 00:19:05.542 "zoned": false, 00:19:05.542 "supported_io_types": { 00:19:05.542 "read": true, 00:19:05.542 "write": true, 
00:19:05.542 "unmap": true, 00:19:05.542 "write_zeroes": true, 00:19:05.542 "flush": true, 00:19:05.542 "reset": true, 00:19:05.542 "compare": true, 00:19:05.542 "compare_and_write": true, 00:19:05.542 "abort": true, 00:19:05.542 "nvme_admin": true, 00:19:05.542 "nvme_io": true 00:19:05.542 }, 00:19:05.542 "memory_domains": [ 00:19:05.542 { 00:19:05.542 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:05.542 "dma_device_type": 0 00:19:05.542 } 00:19:05.542 ], 00:19:05.542 "driver_specific": { 00:19:05.542 "nvme": [ 00:19:05.542 { 00:19:05.542 "trid": { 00:19:05.542 "trtype": "RDMA", 00:19:05.542 "adrfam": "IPv4", 00:19:05.542 "traddr": "192.168.100.8", 00:19:05.542 "trsvcid": "4420", 00:19:05.542 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:05.542 }, 00:19:05.542 "ctrlr_data": { 00:19:05.542 "cntlid": 1, 00:19:05.542 "vendor_id": "0x8086", 00:19:05.542 "model_number": "SPDK bdev Controller", 00:19:05.542 "serial_number": "SPDK0", 00:19:05.542 "firmware_revision": "24.01.1", 00:19:05.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:05.542 "oacs": { 00:19:05.542 "security": 0, 00:19:05.542 "format": 0, 00:19:05.542 "firmware": 0, 00:19:05.542 "ns_manage": 0 00:19:05.542 }, 00:19:05.542 "multi_ctrlr": true, 00:19:05.542 "ana_reporting": false 00:19:05.542 }, 00:19:05.542 "vs": { 00:19:05.542 "nvme_version": "1.3" 00:19:05.543 }, 00:19:05.543 "ns_data": { 00:19:05.543 "id": 1, 00:19:05.543 "can_share": true 00:19:05.543 } 00:19:05.543 } 00:19:05.543 ], 00:19:05.543 "mp_policy": "active_passive" 00:19:05.543 } 00:19:05.543 } 00:19:05.543 ] 00:19:05.543 21:22:40 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1687578 00:19:05.543 21:22:40 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:05.543 21:22:40 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.543 Running I/O for 10 seconds... 
00:19:06.919 Latency(us) 00:19:06.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:06.919 Nvme0n1 : 1.00 36286.00 141.74 0.00 0.00 0.00 0.00 0.00 00:19:06.919 =================================================================================================================== 00:19:06.919 Total : 36286.00 141.74 0.00 0.00 0.00 0.00 0.00 00:19:06.919 00:19:07.485 21:22:42 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:07.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:07.743 Nvme0n1 : 2.00 36608.00 143.00 0.00 0.00 0.00 0.00 0.00 00:19:07.743 =================================================================================================================== 00:19:07.743 Total : 36608.00 143.00 0.00 0.00 0.00 0.00 0.00 00:19:07.743 00:19:07.743 true 00:19:07.743 21:22:42 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:07.743 21:22:42 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:08.001 21:22:42 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:08.001 21:22:42 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:08.001 21:22:42 -- target/nvmf_lvs_grow.sh@65 -- # wait 1687578 00:19:08.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:08.571 Nvme0n1 : 3.00 36863.67 144.00 0.00 0.00 0.00 0.00 0.00 00:19:08.571 =================================================================================================================== 00:19:08.571 Total : 36863.67 144.00 0.00 0.00 0.00 0.00 0.00 00:19:08.571 00:19:09.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:09.508 Nvme0n1 : 4.00 37024.50 144.63 0.00 0.00 0.00 0.00 0.00 00:19:09.508 =================================================================================================================== 00:19:09.508 Total : 37024.50 144.63 0.00 0.00 0.00 0.00 0.00 00:19:09.508 00:19:10.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:10.883 Nvme0n1 : 5.00 37138.60 145.07 0.00 0.00 0.00 0.00 0.00 00:19:10.883 =================================================================================================================== 00:19:10.883 Total : 37138.60 145.07 0.00 0.00 0.00 0.00 0.00 00:19:10.883 00:19:11.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:11.819 Nvme0n1 : 6.00 37226.17 145.41 0.00 0.00 0.00 0.00 0.00 00:19:11.819 =================================================================================================================== 00:19:11.819 Total : 37226.17 145.41 0.00 0.00 0.00 0.00 0.00 00:19:11.819 00:19:12.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:12.755 Nvme0n1 : 7.00 37285.00 145.64 0.00 0.00 0.00 0.00 0.00 00:19:12.755 =================================================================================================================== 00:19:12.755 Total : 37285.00 145.64 0.00 0.00 0.00 0.00 0.00 00:19:12.755 00:19:13.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:13.692 Nvme0n1 : 8.00 37323.62 145.80 0.00 0.00 0.00 0.00 0.00 00:19:13.692 
=================================================================================================================== 00:19:13.692 Total : 37323.62 145.80 0.00 0.00 0.00 0.00 0.00 00:19:13.692 00:19:14.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:14.629 Nvme0n1 : 9.00 37365.00 145.96 0.00 0.00 0.00 0.00 0.00 00:19:14.629 =================================================================================================================== 00:19:14.629 Total : 37365.00 145.96 0.00 0.00 0.00 0.00 0.00 00:19:14.629 00:19:15.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:15.567 Nvme0n1 : 10.00 37343.70 145.87 0.00 0.00 0.00 0.00 0.00 00:19:15.567 =================================================================================================================== 00:19:15.567 Total : 37343.70 145.87 0.00 0.00 0.00 0.00 0.00 00:19:15.567 00:19:15.567 00:19:15.567 Latency(us) 00:19:15.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:15.567 Nvme0n1 : 10.00 37343.41 145.87 0.00 0.00 3425.03 2319.97 14260.63 00:19:15.567 =================================================================================================================== 00:19:15.567 Total : 37343.41 145.87 0.00 0.00 3425.03 2319.97 14260.63 00:19:15.567 0 00:19:15.567 21:22:50 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1687446 00:19:15.567 21:22:50 -- common/autotest_common.sh@926 -- # '[' -z 1687446 ']' 00:19:15.567 21:22:50 -- common/autotest_common.sh@930 -- # kill -0 1687446 00:19:15.567 21:22:50 -- common/autotest_common.sh@931 -- # uname 00:19:15.567 21:22:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:15.567 21:22:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1687446 00:19:15.826 21:22:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:15.826 21:22:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:15.826 21:22:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1687446' 00:19:15.826 killing process with pid 1687446 00:19:15.826 21:22:50 -- common/autotest_common.sh@945 -- # kill 1687446 00:19:15.826 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.826 00:19:15.826 Latency(us) 00:19:15.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.827 =================================================================================================================== 00:19:15.827 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:15.827 21:22:50 -- common/autotest_common.sh@950 -- # wait 1687446 00:19:15.827 21:22:50 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:16.085 21:22:50 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:16.085 21:22:50 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:16.344 21:22:51 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:16.344 21:22:51 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:19:16.344 21:22:51 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1684110 00:19:16.344 21:22:51 -- target/nvmf_lvs_grow.sh@74 -- # wait 1684110 00:19:16.344 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1684110 Killed "${NVMF_APP[@]}" "$@" 00:19:16.344 21:22:51 -- target/nvmf_lvs_grow.sh@74 -- # true 00:19:16.344 21:22:51 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:19:16.344 21:22:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:16.344 21:22:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:16.344 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:19:16.344 21:22:51 -- nvmf/common.sh@469 -- # nvmfpid=1689368 00:19:16.344 21:22:51 -- nvmf/common.sh@470 -- # waitforlisten 1689368 00:19:16.344 21:22:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:16.344 21:22:51 -- common/autotest_common.sh@819 -- # '[' -z 1689368 ']' 00:19:16.344 21:22:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.344 21:22:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:16.344 21:22:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.344 21:22:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:16.344 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:19:16.344 [2024-07-26 21:22:51.089908] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:16.344 [2024-07-26 21:22:51.089960] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.344 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.344 [2024-07-26 21:22:51.178504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.603 [2024-07-26 21:22:51.215271] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:16.603 [2024-07-26 21:22:51.215376] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.603 [2024-07-26 21:22:51.215385] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.603 [2024-07-26 21:22:51.215395] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
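As the notices above mention, the target was started with tracepoint group mask 0xFFFF, so the trace events can be captured either live or from the shared-memory file. Both commands below are taken from this log (the spdk_trace hint printed by the target and the tar step that the test's process_shm helper runs at the end); the archive name is shortened here.
spdk_trace -s nvmf -i 0                                        # live snapshot from the running nvmf target
tar -C /dev/shm/ -cvzf nvmf_trace.0_shm.tar.gz nvmf_trace.0    # or save the trace file for offline analysis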
00:19:16.603 [2024-07-26 21:22:51.215412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.202 21:22:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:17.202 21:22:51 -- common/autotest_common.sh@852 -- # return 0 00:19:17.202 21:22:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:17.202 21:22:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:17.202 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:19:17.202 21:22:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.202 21:22:51 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:17.462 [2024-07-26 21:22:52.079414] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:17.462 [2024-07-26 21:22:52.079500] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:17.462 [2024-07-26 21:22:52.079526] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:17.462 21:22:52 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:19:17.462 21:22:52 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 2a1d2182-74cb-4b9a-871b-6d71ab9ab86d 00:19:17.462 21:22:52 -- common/autotest_common.sh@887 -- # local bdev_name=2a1d2182-74cb-4b9a-871b-6d71ab9ab86d 00:19:17.462 21:22:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:17.462 21:22:52 -- common/autotest_common.sh@889 -- # local i 00:19:17.462 21:22:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:17.462 21:22:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:17.462 21:22:52 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:17.462 21:22:52 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2a1d2182-74cb-4b9a-871b-6d71ab9ab86d -t 2000 00:19:17.721 [ 00:19:17.721 { 00:19:17.721 "name": "2a1d2182-74cb-4b9a-871b-6d71ab9ab86d", 00:19:17.721 "aliases": [ 00:19:17.721 "lvs/lvol" 00:19:17.721 ], 00:19:17.721 "product_name": "Logical Volume", 00:19:17.721 "block_size": 4096, 00:19:17.721 "num_blocks": 38912, 00:19:17.721 "uuid": "2a1d2182-74cb-4b9a-871b-6d71ab9ab86d", 00:19:17.721 "assigned_rate_limits": { 00:19:17.721 "rw_ios_per_sec": 0, 00:19:17.721 "rw_mbytes_per_sec": 0, 00:19:17.721 "r_mbytes_per_sec": 0, 00:19:17.721 "w_mbytes_per_sec": 0 00:19:17.721 }, 00:19:17.721 "claimed": false, 00:19:17.721 "zoned": false, 00:19:17.721 "supported_io_types": { 00:19:17.721 "read": true, 00:19:17.721 "write": true, 00:19:17.721 "unmap": true, 00:19:17.721 "write_zeroes": true, 00:19:17.721 "flush": false, 00:19:17.721 "reset": true, 00:19:17.721 "compare": false, 00:19:17.721 "compare_and_write": false, 00:19:17.721 "abort": false, 00:19:17.721 "nvme_admin": false, 00:19:17.721 "nvme_io": false 00:19:17.721 }, 00:19:17.721 "driver_specific": { 00:19:17.721 "lvol": { 00:19:17.721 "lvol_store_uuid": "c063dbec-1b4e-42b1-9482-be2def491755", 00:19:17.721 "base_bdev": "aio_bdev", 00:19:17.721 "thin_provision": false, 00:19:17.721 "snapshot": false, 00:19:17.721 "clone": false, 00:19:17.721 "esnap_clone": false 00:19:17.721 } 00:19:17.721 } 00:19:17.721 } 00:19:17.721 ] 00:19:17.721 21:22:52 -- common/autotest_common.sh@895 -- # return 0 00:19:17.721 21:22:52 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:17.721 21:22:52 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:17.980 21:22:52 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:17.980 21:22:52 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:17.980 21:22:52 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:17.980 21:22:52 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:17.980 21:22:52 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:18.239 [2024-07-26 21:22:52.915664] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:18.239 21:22:52 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:18.239 21:22:52 -- common/autotest_common.sh@640 -- # local es=0 00:19:18.239 21:22:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:18.239 21:22:52 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:18.239 21:22:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:18.240 21:22:52 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:18.240 21:22:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:18.240 21:22:52 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:18.240 21:22:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:18.240 21:22:52 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:18.240 21:22:52 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:19:18.240 21:22:52 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:18.240 request: 00:19:18.240 { 00:19:18.240 "uuid": "c063dbec-1b4e-42b1-9482-be2def491755", 00:19:18.240 "method": "bdev_lvol_get_lvstores", 00:19:18.240 "req_id": 1 00:19:18.240 } 00:19:18.240 Got JSON-RPC error response 00:19:18.240 response: 00:19:18.240 { 00:19:18.240 "code": -19, 00:19:18.240 "message": "No such device" 00:19:18.240 } 00:19:18.499 21:22:53 -- common/autotest_common.sh@643 -- # es=1 00:19:18.499 21:22:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:18.499 21:22:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:18.499 21:22:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:18.499 21:22:53 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:18.499 aio_bdev 00:19:18.499 21:22:53 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2a1d2182-74cb-4b9a-871b-6d71ab9ab86d 00:19:18.499 21:22:53 -- common/autotest_common.sh@887 -- # local 
bdev_name=2a1d2182-74cb-4b9a-871b-6d71ab9ab86d 00:19:18.499 21:22:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:18.499 21:22:53 -- common/autotest_common.sh@889 -- # local i 00:19:18.499 21:22:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:18.499 21:22:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:18.499 21:22:53 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:18.758 21:22:53 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2a1d2182-74cb-4b9a-871b-6d71ab9ab86d -t 2000 00:19:18.758 [ 00:19:18.758 { 00:19:18.758 "name": "2a1d2182-74cb-4b9a-871b-6d71ab9ab86d", 00:19:18.758 "aliases": [ 00:19:18.758 "lvs/lvol" 00:19:18.758 ], 00:19:18.758 "product_name": "Logical Volume", 00:19:18.758 "block_size": 4096, 00:19:18.758 "num_blocks": 38912, 00:19:18.758 "uuid": "2a1d2182-74cb-4b9a-871b-6d71ab9ab86d", 00:19:18.758 "assigned_rate_limits": { 00:19:18.758 "rw_ios_per_sec": 0, 00:19:18.758 "rw_mbytes_per_sec": 0, 00:19:18.758 "r_mbytes_per_sec": 0, 00:19:18.758 "w_mbytes_per_sec": 0 00:19:18.758 }, 00:19:18.758 "claimed": false, 00:19:18.758 "zoned": false, 00:19:18.758 "supported_io_types": { 00:19:18.758 "read": true, 00:19:18.758 "write": true, 00:19:18.758 "unmap": true, 00:19:18.758 "write_zeroes": true, 00:19:18.758 "flush": false, 00:19:18.758 "reset": true, 00:19:18.758 "compare": false, 00:19:18.758 "compare_and_write": false, 00:19:18.758 "abort": false, 00:19:18.758 "nvme_admin": false, 00:19:18.758 "nvme_io": false 00:19:18.758 }, 00:19:18.758 "driver_specific": { 00:19:18.758 "lvol": { 00:19:18.758 "lvol_store_uuid": "c063dbec-1b4e-42b1-9482-be2def491755", 00:19:18.758 "base_bdev": "aio_bdev", 00:19:18.758 "thin_provision": false, 00:19:18.758 "snapshot": false, 00:19:18.758 "clone": false, 00:19:18.758 "esnap_clone": false 00:19:18.758 } 00:19:18.758 } 00:19:18.758 } 00:19:18.758 ] 00:19:18.758 21:22:53 -- common/autotest_common.sh@895 -- # return 0 00:19:18.758 21:22:53 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:18.758 21:22:53 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:19.017 21:22:53 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:19.017 21:22:53 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:19.017 21:22:53 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:19.276 21:22:53 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:19.276 21:22:53 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2a1d2182-74cb-4b9a-871b-6d71ab9ab86d 00:19:19.276 21:22:54 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c063dbec-1b4e-42b1-9482-be2def491755 00:19:19.535 21:22:54 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:19.794 21:22:54 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:19.794 00:19:19.794 real 0m17.093s 00:19:19.794 user 0m44.117s 00:19:19.794 sys 0m3.465s 00:19:19.794 21:22:54 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.794 21:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:19.794 ************************************ 00:19:19.794 END TEST lvs_grow_dirty 00:19:19.794 ************************************ 00:19:19.794 21:22:54 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:19.794 21:22:54 -- common/autotest_common.sh@796 -- # type=--id 00:19:19.794 21:22:54 -- common/autotest_common.sh@797 -- # id=0 00:19:19.794 21:22:54 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:19.794 21:22:54 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:19.794 21:22:54 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:19.794 21:22:54 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:19.794 21:22:54 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:19.794 21:22:54 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:19.794 nvmf_trace.0 00:19:19.794 21:22:54 -- common/autotest_common.sh@811 -- # return 0 00:19:19.794 21:22:54 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:19.794 21:22:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:19.794 21:22:54 -- nvmf/common.sh@116 -- # sync 00:19:19.794 21:22:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:19.794 21:22:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:19.794 21:22:54 -- nvmf/common.sh@119 -- # set +e 00:19:19.794 21:22:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:19.794 21:22:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:19.794 rmmod nvme_rdma 00:19:19.794 rmmod nvme_fabrics 00:19:19.794 21:22:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:19.794 21:22:54 -- nvmf/common.sh@123 -- # set -e 00:19:19.794 21:22:54 -- nvmf/common.sh@124 -- # return 0 00:19:19.794 21:22:54 -- nvmf/common.sh@477 -- # '[' -n 1689368 ']' 00:19:19.794 21:22:54 -- nvmf/common.sh@478 -- # killprocess 1689368 00:19:19.794 21:22:54 -- common/autotest_common.sh@926 -- # '[' -z 1689368 ']' 00:19:19.794 21:22:54 -- common/autotest_common.sh@930 -- # kill -0 1689368 00:19:19.794 21:22:54 -- common/autotest_common.sh@931 -- # uname 00:19:19.794 21:22:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:19.794 21:22:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1689368 00:19:19.794 21:22:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:20.053 21:22:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:20.053 21:22:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1689368' 00:19:20.053 killing process with pid 1689368 00:19:20.053 21:22:54 -- common/autotest_common.sh@945 -- # kill 1689368 00:19:20.053 21:22:54 -- common/autotest_common.sh@950 -- # wait 1689368 00:19:20.053 21:22:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:20.053 21:22:54 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:20.053 00:19:20.053 real 0m42.067s 00:19:20.053 user 1m5.553s 00:19:20.053 sys 0m11.174s 00:19:20.053 21:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.053 21:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:20.053 ************************************ 00:19:20.053 END TEST nvmf_lvs_grow 00:19:20.053 ************************************ 00:19:20.053 21:22:54 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:19:20.053 21:22:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:20.053 21:22:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:20.053 21:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:20.053 ************************************ 00:19:20.053 START TEST nvmf_bdev_io_wait 00:19:20.053 ************************************ 00:19:20.053 21:22:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:19:20.312 * Looking for test storage... 00:19:20.312 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:20.312 21:22:54 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.312 21:22:54 -- nvmf/common.sh@7 -- # uname -s 00:19:20.312 21:22:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.312 21:22:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.312 21:22:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.312 21:22:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.312 21:22:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.312 21:22:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.312 21:22:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.312 21:22:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.312 21:22:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.312 21:22:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.312 21:22:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:20.312 21:22:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:20.312 21:22:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.312 21:22:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.312 21:22:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.312 21:22:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:20.312 21:22:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.312 21:22:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.312 21:22:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.312 21:22:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.312 21:22:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.312 21:22:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.312 21:22:54 -- paths/export.sh@5 -- # export PATH 00:19:20.313 21:22:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.313 21:22:54 -- nvmf/common.sh@46 -- # : 0 00:19:20.313 21:22:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:20.313 21:22:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:20.313 21:22:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:20.313 21:22:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.313 21:22:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.313 21:22:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:20.313 21:22:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:20.313 21:22:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:20.313 21:22:54 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:20.313 21:22:54 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:20.313 21:22:54 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:20.313 21:22:54 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:20.313 21:22:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.313 21:22:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:20.313 21:22:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:20.313 21:22:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:20.313 21:22:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.313 21:22:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.313 21:22:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.313 21:22:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:20.313 21:22:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:20.313 21:22:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:20.313 21:22:55 -- common/autotest_common.sh@10 -- # set +x 00:19:28.435 21:23:02 -- nvmf/common.sh@288 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:19:28.435 21:23:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:28.435 21:23:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:28.435 21:23:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:28.435 21:23:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:28.435 21:23:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:28.435 21:23:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:28.435 21:23:02 -- nvmf/common.sh@294 -- # net_devs=() 00:19:28.435 21:23:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:28.435 21:23:02 -- nvmf/common.sh@295 -- # e810=() 00:19:28.435 21:23:02 -- nvmf/common.sh@295 -- # local -ga e810 00:19:28.435 21:23:02 -- nvmf/common.sh@296 -- # x722=() 00:19:28.435 21:23:02 -- nvmf/common.sh@296 -- # local -ga x722 00:19:28.435 21:23:02 -- nvmf/common.sh@297 -- # mlx=() 00:19:28.435 21:23:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:28.435 21:23:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.435 21:23:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:28.435 21:23:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:28.435 21:23:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:28.435 21:23:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:28.435 21:23:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:28.435 21:23:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.435 21:23:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:28.435 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:28.435 21:23:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.435 21:23:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.435 21:23:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:28.435 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:28.435 21:23:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:19:28.435 21:23:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.435 21:23:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:28.435 21:23:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.435 21:23:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.435 21:23:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.435 21:23:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.435 21:23:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:28.435 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:28.435 21:23:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.435 21:23:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.435 21:23:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.435 21:23:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.435 21:23:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.435 21:23:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:28.435 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:28.435 21:23:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.435 21:23:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:28.435 21:23:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:28.435 21:23:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:28.435 21:23:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:28.435 21:23:02 -- nvmf/common.sh@57 -- # uname 00:19:28.435 21:23:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:28.435 21:23:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:28.435 21:23:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:28.435 21:23:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:28.435 21:23:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:28.435 21:23:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:28.435 21:23:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:28.435 21:23:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:28.435 21:23:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:28.435 21:23:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:28.435 21:23:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:28.435 21:23:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.435 21:23:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:28.435 21:23:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:28.435 21:23:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.435 21:23:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:28.435 21:23:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:28.435 21:23:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.435 21:23:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:28.435 21:23:02 -- nvmf/common.sh@104 -- # continue 2 00:19:28.435 21:23:02 
-- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:28.435 21:23:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.435 21:23:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.435 21:23:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:28.435 21:23:02 -- nvmf/common.sh@104 -- # continue 2 00:19:28.435 21:23:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:28.435 21:23:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:28.435 21:23:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:28.435 21:23:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:28.435 21:23:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:28.435 21:23:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:28.435 21:23:02 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:28.435 21:23:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:28.435 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.435 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:28.435 altname enp217s0f0np0 00:19:28.435 altname ens818f0np0 00:19:28.435 inet 192.168.100.8/24 scope global mlx_0_0 00:19:28.435 valid_lft forever preferred_lft forever 00:19:28.435 21:23:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:28.435 21:23:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:28.435 21:23:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:28.435 21:23:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:28.435 21:23:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:28.435 21:23:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:28.435 21:23:02 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:28.435 21:23:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:28.435 21:23:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:28.435 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.435 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:28.435 altname enp217s0f1np1 00:19:28.435 altname ens818f1np1 00:19:28.435 inet 192.168.100.9/24 scope global mlx_0_1 00:19:28.435 valid_lft forever preferred_lft forever 00:19:28.435 21:23:02 -- nvmf/common.sh@410 -- # return 0 00:19:28.436 21:23:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:28.436 21:23:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:28.436 21:23:02 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:28.436 21:23:02 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:28.436 21:23:02 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:28.436 21:23:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.436 21:23:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:28.436 21:23:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:28.436 21:23:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.436 21:23:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:28.436 21:23:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:28.436 21:23:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.436 21:23:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.436 21:23:02 -- nvmf/common.sh@103 -- # echo 
mlx_0_0 00:19:28.436 21:23:02 -- nvmf/common.sh@104 -- # continue 2 00:19:28.436 21:23:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:28.436 21:23:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.436 21:23:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.436 21:23:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.436 21:23:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.436 21:23:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:28.436 21:23:02 -- nvmf/common.sh@104 -- # continue 2 00:19:28.436 21:23:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:28.436 21:23:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:28.436 21:23:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:28.436 21:23:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:28.436 21:23:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:28.436 21:23:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:28.436 21:23:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:28.436 21:23:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:28.436 21:23:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:28.436 21:23:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:28.436 21:23:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:28.436 21:23:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:28.436 21:23:02 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:28.436 192.168.100.9' 00:19:28.436 21:23:02 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:28.436 192.168.100.9' 00:19:28.436 21:23:02 -- nvmf/common.sh@445 -- # head -n 1 00:19:28.436 21:23:02 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:28.436 21:23:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:28.436 192.168.100.9' 00:19:28.436 21:23:02 -- nvmf/common.sh@446 -- # tail -n +2 00:19:28.436 21:23:02 -- nvmf/common.sh@446 -- # head -n 1 00:19:28.436 21:23:02 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:28.436 21:23:02 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:28.436 21:23:02 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:28.436 21:23:02 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:28.436 21:23:02 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:28.436 21:23:02 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:28.436 21:23:02 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:28.436 21:23:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:28.436 21:23:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:28.436 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:19:28.436 21:23:02 -- nvmf/common.sh@469 -- # nvmfpid=1694052 00:19:28.436 21:23:02 -- nvmf/common.sh@470 -- # waitforlisten 1694052 00:19:28.436 21:23:02 -- common/autotest_common.sh@819 -- # '[' -z 1694052 ']' 00:19:28.436 21:23:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.436 21:23:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:28.436 21:23:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
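For reference, the interface discovery traced above reduces to a handful of shell commands. A minimal standalone sketch, assuming the mlx_0_0/mlx_0_1 netdev names and the 192.168.100.0/24 addressing used in this run:

  # load the RDMA/IB kernel modules the test depends on
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
      modprobe "$m"
  done
  # print the IPv4 address of each RDMA-capable netdev, as get_ip_address does
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done

Run on this host it would print 192.168.100.8 and 192.168.100.9, matching the RDMA_IP_LIST resolved above.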
00:19:28.436 21:23:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:28.436 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:19:28.436 21:23:02 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:28.436 [2024-07-26 21:23:02.765498] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:28.436 [2024-07-26 21:23:02.765546] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.436 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.436 [2024-07-26 21:23:02.851714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:28.436 [2024-07-26 21:23:02.891762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:28.436 [2024-07-26 21:23:02.891894] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.436 [2024-07-26 21:23:02.891904] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.436 [2024-07-26 21:23:02.891914] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.436 [2024-07-26 21:23:02.891958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.436 [2024-07-26 21:23:02.892058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.436 [2024-07-26 21:23:02.892140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.436 [2024-07-26 21:23:02.892142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.694 21:23:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:28.694 21:23:03 -- common/autotest_common.sh@852 -- # return 0 00:19:28.694 21:23:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:28.694 21:23:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:28.694 21:23:03 -- common/autotest_common.sh@10 -- # set +x 00:19:28.952 21:23:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.952 21:23:03 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:28.952 21:23:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.952 21:23:03 -- common/autotest_common.sh@10 -- # set +x 00:19:28.952 21:23:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.952 21:23:03 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:28.952 21:23:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.952 21:23:03 -- common/autotest_common.sh@10 -- # set +x 00:19:28.952 21:23:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.952 21:23:03 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:28.952 21:23:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.952 21:23:03 -- common/autotest_common.sh@10 -- # set +x 00:19:28.952 [2024-07-26 21:23:03.691221] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21f6fc0/0x21fb4b0) succeed. 00:19:28.952 [2024-07-26 21:23:03.701108] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21f85b0/0x223cb40) succeed. 
00:19:29.211 21:23:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:29.211 21:23:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.211 21:23:03 -- common/autotest_common.sh@10 -- # set +x 00:19:29.211 Malloc0 00:19:29.211 21:23:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.211 21:23:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.211 21:23:03 -- common/autotest_common.sh@10 -- # set +x 00:19:29.211 21:23:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:29.211 21:23:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.211 21:23:03 -- common/autotest_common.sh@10 -- # set +x 00:19:29.211 21:23:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:29.211 21:23:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.211 21:23:03 -- common/autotest_common.sh@10 -- # set +x 00:19:29.211 [2024-07-26 21:23:03.866373] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:29.211 21:23:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1694184 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@30 -- # READ_PID=1694185 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:29.211 21:23:03 -- nvmf/common.sh@520 -- # config=() 00:19:29.211 21:23:03 -- nvmf/common.sh@520 -- # local subsystem config 00:19:29.211 21:23:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:29.211 21:23:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:29.211 { 00:19:29.211 "params": { 00:19:29.211 "name": "Nvme$subsystem", 00:19:29.211 "trtype": "$TEST_TRANSPORT", 00:19:29.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.211 "adrfam": "ipv4", 00:19:29.211 "trsvcid": "$NVMF_PORT", 00:19:29.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.211 "hdgst": ${hdgst:-false}, 00:19:29.211 "ddgst": ${ddgst:-false} 00:19:29.211 }, 00:19:29.211 "method": "bdev_nvme_attach_controller" 00:19:29.211 } 00:19:29.211 EOF 00:19:29.211 )") 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1694187 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1694190 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@35 -- # sync 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:29.211 21:23:03 -- nvmf/common.sh@520 -- # config=() 00:19:29.211 21:23:03 -- nvmf/common.sh@520 -- # local subsystem config 
00:19:29.211 21:23:03 -- nvmf/common.sh@520 -- # config=() 00:19:29.211 21:23:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:29.211 21:23:03 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:29.211 21:23:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:29.211 { 00:19:29.211 "params": { 00:19:29.211 "name": "Nvme$subsystem", 00:19:29.211 "trtype": "$TEST_TRANSPORT", 00:19:29.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.211 "adrfam": "ipv4", 00:19:29.211 "trsvcid": "$NVMF_PORT", 00:19:29.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.211 "hdgst": ${hdgst:-false}, 00:19:29.211 "ddgst": ${ddgst:-false} 00:19:29.211 }, 00:19:29.211 "method": "bdev_nvme_attach_controller" 00:19:29.211 } 00:19:29.211 EOF 00:19:29.211 )") 00:19:29.211 21:23:03 -- nvmf/common.sh@520 -- # local subsystem config 00:19:29.211 21:23:03 -- nvmf/common.sh@542 -- # cat 00:19:29.211 21:23:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:29.211 21:23:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:29.211 { 00:19:29.211 "params": { 00:19:29.211 "name": "Nvme$subsystem", 00:19:29.211 "trtype": "$TEST_TRANSPORT", 00:19:29.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.211 "adrfam": "ipv4", 00:19:29.211 "trsvcid": "$NVMF_PORT", 00:19:29.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.211 "hdgst": ${hdgst:-false}, 00:19:29.211 "ddgst": ${ddgst:-false} 00:19:29.211 }, 00:19:29.211 "method": "bdev_nvme_attach_controller" 00:19:29.212 } 00:19:29.212 EOF 00:19:29.212 )") 00:19:29.212 21:23:03 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:29.212 21:23:03 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:29.212 21:23:03 -- nvmf/common.sh@520 -- # config=() 00:19:29.212 21:23:03 -- nvmf/common.sh@520 -- # local subsystem config 00:19:29.212 21:23:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:29.212 21:23:03 -- nvmf/common.sh@542 -- # cat 00:19:29.212 21:23:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:29.212 { 00:19:29.212 "params": { 00:19:29.212 "name": "Nvme$subsystem", 00:19:29.212 "trtype": "$TEST_TRANSPORT", 00:19:29.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.212 "adrfam": "ipv4", 00:19:29.212 "trsvcid": "$NVMF_PORT", 00:19:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.212 "hdgst": ${hdgst:-false}, 00:19:29.212 "ddgst": ${ddgst:-false} 00:19:29.212 }, 00:19:29.212 "method": "bdev_nvme_attach_controller" 00:19:29.212 } 00:19:29.212 EOF 00:19:29.212 )") 00:19:29.212 21:23:03 -- target/bdev_io_wait.sh@37 -- # wait 1694184 00:19:29.212 21:23:03 -- nvmf/common.sh@542 -- # cat 00:19:29.212 21:23:03 -- nvmf/common.sh@542 -- # cat 00:19:29.212 21:23:03 -- nvmf/common.sh@544 -- # jq . 00:19:29.212 21:23:03 -- nvmf/common.sh@544 -- # jq . 00:19:29.212 21:23:03 -- nvmf/common.sh@544 -- # jq . 
00:19:29.212 21:23:03 -- nvmf/common.sh@545 -- # IFS=, 00:19:29.212 21:23:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:29.212 "params": { 00:19:29.212 "name": "Nvme1", 00:19:29.212 "trtype": "rdma", 00:19:29.212 "traddr": "192.168.100.8", 00:19:29.212 "adrfam": "ipv4", 00:19:29.212 "trsvcid": "4420", 00:19:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.212 "hdgst": false, 00:19:29.212 "ddgst": false 00:19:29.212 }, 00:19:29.212 "method": "bdev_nvme_attach_controller" 00:19:29.212 }' 00:19:29.212 21:23:03 -- nvmf/common.sh@544 -- # jq . 00:19:29.212 21:23:03 -- nvmf/common.sh@545 -- # IFS=, 00:19:29.212 21:23:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:29.212 "params": { 00:19:29.212 "name": "Nvme1", 00:19:29.212 "trtype": "rdma", 00:19:29.212 "traddr": "192.168.100.8", 00:19:29.212 "adrfam": "ipv4", 00:19:29.212 "trsvcid": "4420", 00:19:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.212 "hdgst": false, 00:19:29.212 "ddgst": false 00:19:29.212 }, 00:19:29.212 "method": "bdev_nvme_attach_controller" 00:19:29.212 }' 00:19:29.212 21:23:03 -- nvmf/common.sh@545 -- # IFS=, 00:19:29.212 21:23:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:29.212 "params": { 00:19:29.212 "name": "Nvme1", 00:19:29.212 "trtype": "rdma", 00:19:29.212 "traddr": "192.168.100.8", 00:19:29.212 "adrfam": "ipv4", 00:19:29.212 "trsvcid": "4420", 00:19:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.212 "hdgst": false, 00:19:29.212 "ddgst": false 00:19:29.212 }, 00:19:29.212 "method": "bdev_nvme_attach_controller" 00:19:29.212 }' 00:19:29.212 21:23:03 -- nvmf/common.sh@545 -- # IFS=, 00:19:29.212 21:23:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:29.212 "params": { 00:19:29.212 "name": "Nvme1", 00:19:29.212 "trtype": "rdma", 00:19:29.212 "traddr": "192.168.100.8", 00:19:29.212 "adrfam": "ipv4", 00:19:29.212 "trsvcid": "4420", 00:19:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.212 "hdgst": false, 00:19:29.212 "ddgst": false 00:19:29.212 }, 00:19:29.212 "method": "bdev_nvme_attach_controller" 00:19:29.212 }' 00:19:29.212 [2024-07-26 21:23:03.915364] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:29.212 [2024-07-26 21:23:03.915414] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:29.212 [2024-07-26 21:23:03.916145] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:29.212 [2024-07-26 21:23:03.916191] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:29.212 [2024-07-26 21:23:03.916253] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:19:29.212 [2024-07-26 21:23:03.916307] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:29.212 [2024-07-26 21:23:03.916602] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:29.212 [2024-07-26 21:23:03.916668] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:29.212 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.212 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.471 [2024-07-26 21:23:04.091696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.471 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.471 [2024-07-26 21:23:04.114141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:29.471 [2024-07-26 21:23:04.153300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.471 [2024-07-26 21:23:04.175155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:29.471 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.471 [2024-07-26 21:23:04.251620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.471 [2024-07-26 21:23:04.275321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:29.729 [2024-07-26 21:23:04.344992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.729 [2024-07-26 21:23:04.372730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:29.729 Running I/O for 1 seconds... 00:19:29.729 Running I/O for 1 seconds... 00:19:29.729 Running I/O for 1 seconds... 00:19:29.729 Running I/O for 1 seconds... 
00:19:30.665 00:19:30.665 Latency(us) 00:19:30.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.665 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:30.665 Nvme1n1 : 1.00 266379.00 1040.54 0.00 0.00 479.35 190.05 1690.83 00:19:30.665 =================================================================================================================== 00:19:30.665 Total : 266379.00 1040.54 0.00 0.00 479.35 190.05 1690.83 00:19:30.665 00:19:30.665 Latency(us) 00:19:30.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.665 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:30.665 Nvme1n1 : 1.01 17375.28 67.87 0.00 0.00 7344.46 4089.45 15623.78 00:19:30.665 =================================================================================================================== 00:19:30.665 Total : 17375.28 67.87 0.00 0.00 7344.46 4089.45 15623.78 00:19:30.665 00:19:30.665 Latency(us) 00:19:30.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.665 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:30.665 Nvme1n1 : 1.00 15752.90 61.53 0.00 0.00 8101.55 4823.45 20132.66 00:19:30.665 =================================================================================================================== 00:19:30.665 Total : 15752.90 61.53 0.00 0.00 8101.55 4823.45 20132.66 00:19:30.665 00:19:30.665 Latency(us) 00:19:30.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.665 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:30.665 Nvme1n1 : 1.00 16895.56 66.00 0.00 0.00 7554.70 4771.02 19084.08 00:19:30.665 =================================================================================================================== 00:19:30.665 Total : 16895.56 66.00 0.00 0.00 7554.70 4771.02 19084.08 00:19:31.232 21:23:05 -- target/bdev_io_wait.sh@38 -- # wait 1694185 00:19:31.232 21:23:05 -- target/bdev_io_wait.sh@39 -- # wait 1694187 00:19:31.232 21:23:05 -- target/bdev_io_wait.sh@40 -- # wait 1694190 00:19:31.232 21:23:05 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.232 21:23:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.232 21:23:05 -- common/autotest_common.sh@10 -- # set +x 00:19:31.232 21:23:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.232 21:23:05 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:31.232 21:23:05 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:31.232 21:23:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:31.232 21:23:05 -- nvmf/common.sh@116 -- # sync 00:19:31.232 21:23:05 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:31.232 21:23:05 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:31.232 21:23:05 -- nvmf/common.sh@119 -- # set +e 00:19:31.232 21:23:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:31.232 21:23:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:31.232 rmmod nvme_rdma 00:19:31.232 rmmod nvme_fabrics 00:19:31.232 21:23:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:31.232 21:23:05 -- nvmf/common.sh@123 -- # set -e 00:19:31.232 21:23:05 -- nvmf/common.sh@124 -- # return 0 00:19:31.232 21:23:05 -- nvmf/common.sh@477 -- # '[' -n 1694052 ']' 00:19:31.232 21:23:05 -- nvmf/common.sh@478 -- # killprocess 1694052 00:19:31.232 21:23:05 -- common/autotest_common.sh@926 -- # '[' -z 1694052 ']' 
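The four result tables above map one-to-one to the bdevperf instances launched earlier (flush on core mask 0x40, read on 0x20, write on 0x10, unmap on 0x80), each fed the bdev_nvme_attach_controller JSON printed above through a process-substitution file descriptor. A sketch of how the write instance was invoked (the others differ only in -m/-i and the -w workload; the exact /dev/fd number is whatever the shell assigns):

  build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256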
00:19:31.232 21:23:05 -- common/autotest_common.sh@930 -- # kill -0 1694052 00:19:31.232 21:23:05 -- common/autotest_common.sh@931 -- # uname 00:19:31.232 21:23:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:31.232 21:23:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1694052 00:19:31.232 21:23:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:31.232 21:23:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:31.232 21:23:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1694052' 00:19:31.232 killing process with pid 1694052 00:19:31.232 21:23:05 -- common/autotest_common.sh@945 -- # kill 1694052 00:19:31.232 21:23:05 -- common/autotest_common.sh@950 -- # wait 1694052 00:19:31.491 21:23:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:31.491 21:23:06 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:31.491 00:19:31.491 real 0m11.347s 00:19:31.491 user 0m20.765s 00:19:31.491 sys 0m7.356s 00:19:31.491 21:23:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:31.491 21:23:06 -- common/autotest_common.sh@10 -- # set +x 00:19:31.491 ************************************ 00:19:31.491 END TEST nvmf_bdev_io_wait 00:19:31.491 ************************************ 00:19:31.491 21:23:06 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:31.491 21:23:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:31.491 21:23:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:31.491 21:23:06 -- common/autotest_common.sh@10 -- # set +x 00:19:31.491 ************************************ 00:19:31.491 START TEST nvmf_queue_depth 00:19:31.491 ************************************ 00:19:31.491 21:23:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:31.750 * Looking for test storage... 
00:19:31.750 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:31.750 21:23:06 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.750 21:23:06 -- nvmf/common.sh@7 -- # uname -s 00:19:31.750 21:23:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.750 21:23:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.750 21:23:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.750 21:23:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.750 21:23:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.750 21:23:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.750 21:23:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.750 21:23:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.750 21:23:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.750 21:23:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.750 21:23:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:31.750 21:23:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:31.750 21:23:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.751 21:23:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.751 21:23:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.751 21:23:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:31.751 21:23:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.751 21:23:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.751 21:23:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.751 21:23:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.751 21:23:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.751 21:23:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.751 21:23:06 -- paths/export.sh@5 -- # export PATH 00:19:31.751 21:23:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.751 21:23:06 -- nvmf/common.sh@46 -- # : 0 00:19:31.751 21:23:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:31.751 21:23:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:31.751 21:23:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:31.751 21:23:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.751 21:23:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.751 21:23:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:31.751 21:23:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:31.751 21:23:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:31.751 21:23:06 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:31.751 21:23:06 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:31.751 21:23:06 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.751 21:23:06 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:31.751 21:23:06 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:31.751 21:23:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.751 21:23:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:31.751 21:23:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:31.751 21:23:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:31.751 21:23:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.751 21:23:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.751 21:23:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.751 21:23:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:31.751 21:23:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:31.751 21:23:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:31.751 21:23:06 -- common/autotest_common.sh@10 -- # set +x 00:19:39.872 21:23:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:39.872 21:23:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:39.872 21:23:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:39.872 21:23:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:39.872 21:23:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:39.872 21:23:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:39.872 21:23:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:39.872 21:23:14 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:39.872 21:23:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:39.872 21:23:14 -- nvmf/common.sh@295 -- # e810=() 00:19:39.872 21:23:14 -- nvmf/common.sh@295 -- # local -ga e810 00:19:39.872 21:23:14 -- nvmf/common.sh@296 -- # x722=() 00:19:39.872 21:23:14 -- nvmf/common.sh@296 -- # local -ga x722 00:19:39.872 21:23:14 -- nvmf/common.sh@297 -- # mlx=() 00:19:39.872 21:23:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:39.872 21:23:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.872 21:23:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:39.872 21:23:14 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:39.872 21:23:14 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:39.872 21:23:14 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:39.872 21:23:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:39.872 21:23:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:39.872 21:23:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:39.872 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:39.872 21:23:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:39.872 21:23:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:39.872 21:23:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:39.872 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:39.872 21:23:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:39.872 21:23:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:39.872 21:23:14 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:39.872 21:23:14 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.872 21:23:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:39.872 21:23:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.872 21:23:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:39.872 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:39.872 21:23:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.872 21:23:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:39.872 21:23:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.872 21:23:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:39.872 21:23:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.872 21:23:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:39.872 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:39.872 21:23:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.872 21:23:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:39.872 21:23:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:39.872 21:23:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:39.872 21:23:14 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:39.872 21:23:14 -- nvmf/common.sh@57 -- # uname 00:19:39.872 21:23:14 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:39.872 21:23:14 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:39.872 21:23:14 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:39.872 21:23:14 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:39.872 21:23:14 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:39.872 21:23:14 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:39.872 21:23:14 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:39.872 21:23:14 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:39.872 21:23:14 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:39.872 21:23:14 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:39.872 21:23:14 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:39.872 21:23:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:39.872 21:23:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:39.872 21:23:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:39.872 21:23:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:39.872 21:23:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:39.872 21:23:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:39.872 21:23:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.872 21:23:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:39.872 21:23:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:39.872 21:23:14 -- nvmf/common.sh@104 -- # continue 2 00:19:39.872 21:23:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:39.872 21:23:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.873 21:23:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:39.873 21:23:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.873 21:23:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:39.873 21:23:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:39.873 21:23:14 -- 
nvmf/common.sh@104 -- # continue 2 00:19:39.873 21:23:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:39.873 21:23:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:39.873 21:23:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:39.873 21:23:14 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:39.873 21:23:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:39.873 21:23:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:39.873 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:39.873 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:39.873 altname enp217s0f0np0 00:19:39.873 altname ens818f0np0 00:19:39.873 inet 192.168.100.8/24 scope global mlx_0_0 00:19:39.873 valid_lft forever preferred_lft forever 00:19:39.873 21:23:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:39.873 21:23:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:39.873 21:23:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:39.873 21:23:14 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:39.873 21:23:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:39.873 21:23:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:39.873 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:39.873 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:39.873 altname enp217s0f1np1 00:19:39.873 altname ens818f1np1 00:19:39.873 inet 192.168.100.9/24 scope global mlx_0_1 00:19:39.873 valid_lft forever preferred_lft forever 00:19:39.873 21:23:14 -- nvmf/common.sh@410 -- # return 0 00:19:39.873 21:23:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:39.873 21:23:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:39.873 21:23:14 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:39.873 21:23:14 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:39.873 21:23:14 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:39.873 21:23:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:39.873 21:23:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:39.873 21:23:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:39.873 21:23:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:39.873 21:23:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:39.873 21:23:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:39.873 21:23:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.873 21:23:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:39.873 21:23:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:39.873 21:23:14 -- nvmf/common.sh@104 -- # continue 2 00:19:39.873 21:23:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:39.873 21:23:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.873 21:23:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:39.873 21:23:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.873 21:23:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:19:39.873 21:23:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:39.873 21:23:14 -- nvmf/common.sh@104 -- # continue 2 00:19:39.873 21:23:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:39.873 21:23:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:39.873 21:23:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:39.873 21:23:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:39.873 21:23:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:39.873 21:23:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:39.873 21:23:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:39.873 21:23:14 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:39.873 192.168.100.9' 00:19:39.873 21:23:14 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:39.873 192.168.100.9' 00:19:39.873 21:23:14 -- nvmf/common.sh@445 -- # head -n 1 00:19:39.873 21:23:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:39.873 21:23:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:39.873 192.168.100.9' 00:19:39.873 21:23:14 -- nvmf/common.sh@446 -- # head -n 1 00:19:39.873 21:23:14 -- nvmf/common.sh@446 -- # tail -n +2 00:19:39.873 21:23:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:39.873 21:23:14 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:39.873 21:23:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:39.873 21:23:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:39.873 21:23:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:39.873 21:23:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:39.873 21:23:14 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:39.873 21:23:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:39.873 21:23:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:39.873 21:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:39.873 21:23:14 -- nvmf/common.sh@469 -- # nvmfpid=1698664 00:19:39.873 21:23:14 -- nvmf/common.sh@470 -- # waitforlisten 1698664 00:19:39.873 21:23:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:39.873 21:23:14 -- common/autotest_common.sh@819 -- # '[' -z 1698664 ']' 00:19:39.873 21:23:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.873 21:23:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:39.873 21:23:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.873 21:23:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:39.873 21:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:39.873 [2024-07-26 21:23:14.579591] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:19:39.873 [2024-07-26 21:23:14.579654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.873 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.873 [2024-07-26 21:23:14.665866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.873 [2024-07-26 21:23:14.703753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:39.873 [2024-07-26 21:23:14.703861] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.873 [2024-07-26 21:23:14.703871] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.873 [2024-07-26 21:23:14.703881] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.873 [2024-07-26 21:23:14.703905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.811 21:23:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:40.811 21:23:15 -- common/autotest_common.sh@852 -- # return 0 00:19:40.811 21:23:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:40.811 21:23:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:40.811 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.811 21:23:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.811 21:23:15 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:40.811 21:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.811 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.811 [2024-07-26 21:23:15.438349] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1500250/0x1504740) succeed. 00:19:40.811 [2024-07-26 21:23:15.447252] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1501750/0x1545dd0) succeed. 
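With the second target instance up (nvmf_tgt on core mask 0x2, both IB devices registered), the trace below repeats the Malloc0/cnode1/listener setup and then measures queue depth 1024 with bdevperf running as an RPC server: the NVMe-oF controller is attached at runtime over /var/tmp/bdevperf.sock and the run is kicked off with perform_tests. Stripped of the harness, that flow looks roughly like this (socket path, NQN, and address as used in this run):

  # start bdevperf in wait-for-RPC mode (-z) on its own RPC socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the NVMe-oF RDMA controller to the running bdevperf instance
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # trigger the timed run against the resulting NVMe0n1 bdev
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests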
00:19:40.811 21:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.811 21:23:15 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:40.811 21:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.811 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.811 Malloc0 00:19:40.811 21:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.811 21:23:15 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.811 21:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.811 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.811 21:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.811 21:23:15 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:40.811 21:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.812 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.812 21:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.812 21:23:15 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:40.812 21:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.812 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.812 [2024-07-26 21:23:15.531311] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:40.812 21:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.812 21:23:15 -- target/queue_depth.sh@30 -- # bdevperf_pid=1698931 00:19:40.812 21:23:15 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.812 21:23:15 -- target/queue_depth.sh@33 -- # waitforlisten 1698931 /var/tmp/bdevperf.sock 00:19:40.812 21:23:15 -- common/autotest_common.sh@819 -- # '[' -z 1698931 ']' 00:19:40.812 21:23:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.812 21:23:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:40.812 21:23:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.812 21:23:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:40.812 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:19:40.812 21:23:15 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:40.812 [2024-07-26 21:23:15.578502] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:19:40.812 [2024-07-26 21:23:15.578551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698931 ] 00:19:40.812 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.812 [2024-07-26 21:23:15.663028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.071 [2024-07-26 21:23:15.699497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.684 21:23:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:41.684 21:23:16 -- common/autotest_common.sh@852 -- # return 0 00:19:41.684 21:23:16 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:41.684 21:23:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.684 21:23:16 -- common/autotest_common.sh@10 -- # set +x 00:19:41.684 NVMe0n1 00:19:41.684 21:23:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.684 21:23:16 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:41.684 Running I/O for 10 seconds... 00:19:53.893 00:19:53.893 Latency(us) 00:19:53.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.893 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:53.893 Verification LBA range: start 0x0 length 0x4000 00:19:53.893 NVMe0n1 : 10.03 29500.63 115.24 0.00 0.00 34632.35 8074.04 33344.72 00:19:53.893 =================================================================================================================== 00:19:53.893 Total : 29500.63 115.24 0.00 0.00 34632.35 8074.04 33344.72 00:19:53.893 0 00:19:53.893 21:23:26 -- target/queue_depth.sh@39 -- # killprocess 1698931 00:19:53.893 21:23:26 -- common/autotest_common.sh@926 -- # '[' -z 1698931 ']' 00:19:53.893 21:23:26 -- common/autotest_common.sh@930 -- # kill -0 1698931 00:19:53.893 21:23:26 -- common/autotest_common.sh@931 -- # uname 00:19:53.893 21:23:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:53.893 21:23:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1698931 00:19:53.893 21:23:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:53.893 21:23:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:53.893 21:23:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1698931' 00:19:53.893 killing process with pid 1698931 00:19:53.893 21:23:26 -- common/autotest_common.sh@945 -- # kill 1698931 00:19:53.893 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.893 00:19:53.893 Latency(us) 00:19:53.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.893 =================================================================================================================== 00:19:53.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.893 21:23:26 -- common/autotest_common.sh@950 -- # wait 1698931 00:19:53.893 21:23:26 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:53.893 21:23:26 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:53.893 21:23:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:53.893 21:23:26 -- nvmf/common.sh@116 -- # sync 00:19:53.893 21:23:26 -- nvmf/common.sh@118 -- # '[' rdma == tcp 
']' 00:19:53.893 21:23:26 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:53.893 21:23:26 -- nvmf/common.sh@119 -- # set +e 00:19:53.893 21:23:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:53.893 21:23:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:53.893 rmmod nvme_rdma 00:19:53.893 rmmod nvme_fabrics 00:19:53.893 21:23:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:53.893 21:23:26 -- nvmf/common.sh@123 -- # set -e 00:19:53.893 21:23:26 -- nvmf/common.sh@124 -- # return 0 00:19:53.893 21:23:26 -- nvmf/common.sh@477 -- # '[' -n 1698664 ']' 00:19:53.893 21:23:26 -- nvmf/common.sh@478 -- # killprocess 1698664 00:19:53.893 21:23:26 -- common/autotest_common.sh@926 -- # '[' -z 1698664 ']' 00:19:53.893 21:23:26 -- common/autotest_common.sh@930 -- # kill -0 1698664 00:19:53.893 21:23:26 -- common/autotest_common.sh@931 -- # uname 00:19:53.893 21:23:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:53.893 21:23:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1698664 00:19:53.893 21:23:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:53.893 21:23:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:53.893 21:23:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1698664' 00:19:53.893 killing process with pid 1698664 00:19:53.893 21:23:26 -- common/autotest_common.sh@945 -- # kill 1698664 00:19:53.893 21:23:26 -- common/autotest_common.sh@950 -- # wait 1698664 00:19:53.893 21:23:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:53.893 21:23:27 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:53.893 00:19:53.893 real 0m20.908s 00:19:53.893 user 0m26.177s 00:19:53.893 sys 0m7.027s 00:19:53.893 21:23:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.893 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:53.893 ************************************ 00:19:53.893 END TEST nvmf_queue_depth 00:19:53.893 ************************************ 00:19:53.893 21:23:27 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:53.893 21:23:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:53.893 21:23:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:53.893 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:53.893 ************************************ 00:19:53.893 START TEST nvmf_multipath 00:19:53.893 ************************************ 00:19:53.893 21:23:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:53.893 * Looking for test storage... 
00:19:53.893 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:53.893 21:23:27 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.893 21:23:27 -- nvmf/common.sh@7 -- # uname -s 00:19:53.893 21:23:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.893 21:23:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.893 21:23:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.893 21:23:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.893 21:23:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.893 21:23:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.893 21:23:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.893 21:23:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.893 21:23:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.893 21:23:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.893 21:23:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:53.893 21:23:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:53.893 21:23:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.893 21:23:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.893 21:23:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.893 21:23:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:53.893 21:23:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.893 21:23:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.893 21:23:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.893 21:23:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.893 21:23:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.893 21:23:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.893 21:23:27 -- paths/export.sh@5 -- # export PATH 00:19:53.893 21:23:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.893 21:23:27 -- nvmf/common.sh@46 -- # : 0 00:19:53.893 21:23:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:53.893 21:23:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:53.893 21:23:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:53.893 21:23:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.894 21:23:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.894 21:23:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:53.894 21:23:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:53.894 21:23:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:53.894 21:23:27 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:53.894 21:23:27 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:53.894 21:23:27 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:53.894 21:23:27 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:53.894 21:23:27 -- target/multipath.sh@43 -- # nvmftestinit 00:19:53.894 21:23:27 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:53.894 21:23:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.894 21:23:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:53.894 21:23:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:53.894 21:23:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:53.894 21:23:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.894 21:23:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.894 21:23:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.894 21:23:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:53.894 21:23:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:53.894 21:23:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:53.894 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:20:02.017 21:23:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:02.017 21:23:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:02.017 21:23:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:02.017 21:23:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:02.017 21:23:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:02.017 21:23:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:02.017 21:23:35 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:20:02.017 21:23:35 -- nvmf/common.sh@294 -- # net_devs=() 00:20:02.017 21:23:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:02.017 21:23:35 -- nvmf/common.sh@295 -- # e810=() 00:20:02.017 21:23:35 -- nvmf/common.sh@295 -- # local -ga e810 00:20:02.017 21:23:35 -- nvmf/common.sh@296 -- # x722=() 00:20:02.017 21:23:35 -- nvmf/common.sh@296 -- # local -ga x722 00:20:02.017 21:23:35 -- nvmf/common.sh@297 -- # mlx=() 00:20:02.017 21:23:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:02.017 21:23:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.017 21:23:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:02.017 21:23:35 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:02.017 21:23:35 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:02.017 21:23:35 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:02.017 21:23:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:02.017 21:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:02.017 21:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:02.017 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:02.017 21:23:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:02.017 21:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:02.017 21:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:02.017 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:02.017 21:23:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:02.017 21:23:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:02.017 21:23:35 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:02.017 21:23:35 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:02.017 21:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.017 21:23:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:02.017 21:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.017 21:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:02.017 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:02.017 21:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.017 21:23:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:02.017 21:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.017 21:23:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:02.017 21:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.017 21:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:02.017 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:02.017 21:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.017 21:23:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:02.017 21:23:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:02.017 21:23:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:02.017 21:23:35 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:02.017 21:23:35 -- nvmf/common.sh@57 -- # uname 00:20:02.017 21:23:35 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:02.017 21:23:35 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:02.017 21:23:35 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:02.017 21:23:35 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:02.017 21:23:35 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:02.017 21:23:35 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:02.017 21:23:35 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:02.017 21:23:35 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:02.017 21:23:35 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:02.017 21:23:35 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:02.017 21:23:35 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:02.017 21:23:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:02.017 21:23:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:02.017 21:23:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:02.017 21:23:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:02.017 21:23:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:02.017 21:23:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:02.017 21:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.017 21:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:02.017 21:23:35 -- nvmf/common.sh@104 -- # continue 2 00:20:02.017 21:23:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:02.017 21:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.017 21:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.017 21:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:02.017 21:23:35 -- nvmf/common.sh@104 -- # continue 2 00:20:02.017 21:23:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:02.017 21:23:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:02.017 21:23:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:02.017 21:23:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:02.017 21:23:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:02.017 21:23:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:02.017 21:23:35 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:02.017 21:23:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:02.017 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:02.017 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:02.017 altname enp217s0f0np0 00:20:02.017 altname ens818f0np0 00:20:02.017 inet 192.168.100.8/24 scope global mlx_0_0 00:20:02.017 valid_lft forever preferred_lft forever 00:20:02.017 21:23:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:02.017 21:23:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:02.017 21:23:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:02.017 21:23:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:02.017 21:23:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:02.017 21:23:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:02.017 21:23:35 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:02.017 21:23:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:02.017 21:23:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:02.017 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:02.017 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:02.017 altname enp217s0f1np1 00:20:02.017 altname ens818f1np1 00:20:02.017 inet 192.168.100.9/24 scope global mlx_0_1 00:20:02.017 valid_lft forever preferred_lft forever 00:20:02.017 21:23:35 -- nvmf/common.sh@410 -- # return 0 00:20:02.018 21:23:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:02.018 21:23:35 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:02.018 21:23:35 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:02.018 21:23:35 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:02.018 21:23:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:02.018 21:23:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:02.018 21:23:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:02.018 21:23:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:02.018 21:23:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:02.018 21:23:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:02.018 21:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.018 21:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:02.018 21:23:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:02.018 21:23:35 -- nvmf/common.sh@104 -- # continue 2 00:20:02.018 21:23:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:02.018 21:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.018 21:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:02.018 21:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:20:02.018 21:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:02.018 21:23:35 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:02.018 21:23:35 -- nvmf/common.sh@104 -- # continue 2 00:20:02.018 21:23:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:02.018 21:23:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:02.018 21:23:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:02.018 21:23:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:02.018 21:23:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:02.018 21:23:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:02.018 21:23:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:02.018 21:23:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:02.018 21:23:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:02.018 21:23:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:02.018 21:23:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:02.018 21:23:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:02.018 21:23:35 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:02.018 192.168.100.9' 00:20:02.018 21:23:35 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:02.018 192.168.100.9' 00:20:02.018 21:23:35 -- nvmf/common.sh@445 -- # head -n 1 00:20:02.018 21:23:35 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:02.018 21:23:35 -- nvmf/common.sh@446 -- # head -n 1 00:20:02.018 21:23:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:02.018 192.168.100.9' 00:20:02.018 21:23:35 -- nvmf/common.sh@446 -- # tail -n +2 00:20:02.018 21:23:35 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:02.018 21:23:35 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:02.018 21:23:35 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:02.018 21:23:35 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:20:02.018 21:23:35 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:20:02.018 21:23:35 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:20:02.018 run this test only with TCP transport for now 00:20:02.018 21:23:35 -- target/multipath.sh@53 -- # nvmftestfini 00:20:02.018 21:23:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:02.018 21:23:35 -- nvmf/common.sh@116 -- # sync 00:20:02.018 21:23:35 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@119 -- # set +e 00:20:02.018 21:23:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:02.018 21:23:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:02.018 rmmod nvme_rdma 00:20:02.018 rmmod nvme_fabrics 00:20:02.018 21:23:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:02.018 21:23:35 -- nvmf/common.sh@123 -- # set -e 00:20:02.018 21:23:35 -- nvmf/common.sh@124 -- # return 0 00:20:02.018 21:23:35 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:02.018 21:23:35 -- target/multipath.sh@54 -- # exit 0 00:20:02.018 21:23:35 -- target/multipath.sh@1 -- # nvmftestfini 00:20:02.018 21:23:35 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:20:02.018 21:23:35 -- nvmf/common.sh@116 -- # sync 00:20:02.018 21:23:35 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@119 -- # set +e 00:20:02.018 21:23:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:02.018 21:23:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:02.018 21:23:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:02.018 21:23:35 -- nvmf/common.sh@123 -- # set -e 00:20:02.018 21:23:35 -- nvmf/common.sh@124 -- # return 0 00:20:02.018 21:23:35 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:02.018 00:20:02.018 real 0m8.465s 00:20:02.018 user 0m2.344s 00:20:02.018 sys 0m6.362s 00:20:02.018 21:23:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.018 21:23:35 -- common/autotest_common.sh@10 -- # set +x 00:20:02.018 ************************************ 00:20:02.018 END TEST nvmf_multipath 00:20:02.018 ************************************ 00:20:02.018 21:23:35 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:20:02.018 21:23:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:02.018 21:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.018 21:23:35 -- common/autotest_common.sh@10 -- # set +x 00:20:02.018 ************************************ 00:20:02.018 START TEST nvmf_zcopy 00:20:02.018 ************************************ 00:20:02.018 21:23:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:20:02.018 * Looking for test storage... 
00:20:02.018 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:02.018 21:23:35 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.018 21:23:35 -- nvmf/common.sh@7 -- # uname -s 00:20:02.018 21:23:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.018 21:23:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.018 21:23:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.018 21:23:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.018 21:23:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.018 21:23:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.018 21:23:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.018 21:23:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.018 21:23:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.018 21:23:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.018 21:23:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:02.018 21:23:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:02.018 21:23:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.018 21:23:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.018 21:23:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.018 21:23:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:02.018 21:23:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.018 21:23:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.018 21:23:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.018 21:23:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.018 21:23:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.018 21:23:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.018 21:23:35 -- paths/export.sh@5 -- # export PATH 00:20:02.018 21:23:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.018 21:23:35 -- nvmf/common.sh@46 -- # : 0 00:20:02.018 21:23:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:02.018 21:23:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:02.018 21:23:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.018 21:23:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.018 21:23:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:02.018 21:23:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:02.018 21:23:35 -- target/zcopy.sh@12 -- # nvmftestinit 00:20:02.018 21:23:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:02.019 21:23:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.019 21:23:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:02.019 21:23:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:02.019 21:23:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:02.019 21:23:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.019 21:23:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.019 21:23:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.019 21:23:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:02.019 21:23:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:02.019 21:23:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:02.019 21:23:35 -- common/autotest_common.sh@10 -- # set +x 00:20:10.141 21:23:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:10.141 21:23:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:10.141 21:23:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:10.141 21:23:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:10.141 21:23:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:10.141 21:23:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:10.141 21:23:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:10.141 21:23:43 -- nvmf/common.sh@294 -- # net_devs=() 00:20:10.141 21:23:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:10.141 21:23:43 -- nvmf/common.sh@295 -- # e810=() 00:20:10.141 21:23:43 -- nvmf/common.sh@295 -- # local -ga e810 00:20:10.141 21:23:43 -- nvmf/common.sh@296 -- # x722=() 
00:20:10.141 21:23:43 -- nvmf/common.sh@296 -- # local -ga x722 00:20:10.141 21:23:43 -- nvmf/common.sh@297 -- # mlx=() 00:20:10.141 21:23:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:10.141 21:23:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.141 21:23:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:10.141 21:23:43 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:10.141 21:23:43 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:10.141 21:23:43 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:10.141 21:23:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:10.141 21:23:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:10.141 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:10.141 21:23:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:10.141 21:23:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:10.141 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:10.141 21:23:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:10.141 21:23:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:10.141 21:23:43 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.141 21:23:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:10.141 21:23:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.141 21:23:43 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:10.141 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:10.141 21:23:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.141 21:23:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.141 21:23:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:10.141 21:23:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.141 21:23:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:10.141 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:10.141 21:23:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.141 21:23:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:10.141 21:23:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:10.141 21:23:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:10.141 21:23:43 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:10.141 21:23:43 -- nvmf/common.sh@57 -- # uname 00:20:10.141 21:23:43 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:10.141 21:23:43 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:10.141 21:23:43 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:10.141 21:23:43 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:10.141 21:23:43 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:10.141 21:23:43 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:10.141 21:23:43 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:10.141 21:23:43 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:10.141 21:23:43 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:10.141 21:23:43 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:10.141 21:23:43 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:10.141 21:23:43 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:10.141 21:23:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:10.141 21:23:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:10.141 21:23:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:10.141 21:23:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:10.141 21:23:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:10.141 21:23:43 -- nvmf/common.sh@104 -- # continue 2 00:20:10.141 21:23:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:10.141 21:23:43 -- nvmf/common.sh@104 -- # continue 2 00:20:10.141 21:23:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:10.141 21:23:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:10.141 21:23:43 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:20:10.141 21:23:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:10.141 21:23:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:10.141 21:23:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:10.141 21:23:43 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:10.141 21:23:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:10.141 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:10.141 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:10.141 altname enp217s0f0np0 00:20:10.141 altname ens818f0np0 00:20:10.141 inet 192.168.100.8/24 scope global mlx_0_0 00:20:10.141 valid_lft forever preferred_lft forever 00:20:10.141 21:23:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:10.141 21:23:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:10.141 21:23:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:10.141 21:23:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:10.141 21:23:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:10.141 21:23:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:10.141 21:23:43 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:10.141 21:23:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:10.141 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:10.141 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:10.141 altname enp217s0f1np1 00:20:10.141 altname ens818f1np1 00:20:10.141 inet 192.168.100.9/24 scope global mlx_0_1 00:20:10.141 valid_lft forever preferred_lft forever 00:20:10.141 21:23:43 -- nvmf/common.sh@410 -- # return 0 00:20:10.141 21:23:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:10.141 21:23:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:10.141 21:23:43 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:10.141 21:23:43 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:10.141 21:23:43 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:10.141 21:23:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:10.141 21:23:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:10.141 21:23:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:10.141 21:23:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:10.141 21:23:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:10.141 21:23:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:10.141 21:23:43 -- nvmf/common.sh@104 -- # continue 2 00:20:10.141 21:23:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.141 21:23:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:10.142 21:23:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.142 21:23:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:10.142 21:23:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:10.142 21:23:43 -- nvmf/common.sh@104 -- # continue 2 00:20:10.142 21:23:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:10.142 21:23:43 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:10.142 21:23:43 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:10.142 21:23:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:10.142 21:23:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:10.142 21:23:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:10.142 21:23:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:10.142 21:23:43 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:10.142 21:23:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:10.142 21:23:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:10.142 21:23:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:10.142 21:23:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:10.142 21:23:43 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:10.142 192.168.100.9' 00:20:10.142 21:23:43 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:10.142 192.168.100.9' 00:20:10.142 21:23:43 -- nvmf/common.sh@445 -- # head -n 1 00:20:10.142 21:23:43 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:10.142 21:23:43 -- nvmf/common.sh@446 -- # tail -n +2 00:20:10.142 21:23:43 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:10.142 192.168.100.9' 00:20:10.142 21:23:43 -- nvmf/common.sh@446 -- # head -n 1 00:20:10.142 21:23:43 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:10.142 21:23:43 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:10.142 21:23:43 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:10.142 21:23:43 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:10.142 21:23:43 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:10.142 21:23:43 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:10.142 21:23:43 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:20:10.142 21:23:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:10.142 21:23:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:10.142 21:23:43 -- common/autotest_common.sh@10 -- # set +x 00:20:10.142 21:23:43 -- nvmf/common.sh@469 -- # nvmfpid=1708775 00:20:10.142 21:23:43 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:10.142 21:23:43 -- nvmf/common.sh@470 -- # waitforlisten 1708775 00:20:10.142 21:23:43 -- common/autotest_common.sh@819 -- # '[' -z 1708775 ']' 00:20:10.142 21:23:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.142 21:23:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:10.142 21:23:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.142 21:23:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:10.142 21:23:43 -- common/autotest_common.sh@10 -- # set +x 00:20:10.142 [2024-07-26 21:23:43.989179] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:20:10.142 [2024-07-26 21:23:43.989233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.142 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.142 [2024-07-26 21:23:44.077047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.142 [2024-07-26 21:23:44.113992] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:10.142 [2024-07-26 21:23:44.114097] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.142 [2024-07-26 21:23:44.114107] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.142 [2024-07-26 21:23:44.114116] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.142 [2024-07-26 21:23:44.114139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.142 21:23:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:10.142 21:23:44 -- common/autotest_common.sh@852 -- # return 0 00:20:10.142 21:23:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:10.142 21:23:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:10.142 21:23:44 -- common/autotest_common.sh@10 -- # set +x 00:20:10.142 21:23:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.142 21:23:44 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:20:10.142 21:23:44 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:20:10.142 Unsupported transport: rdma 00:20:10.142 21:23:44 -- target/zcopy.sh@17 -- # exit 0 00:20:10.142 21:23:44 -- target/zcopy.sh@1 -- # process_shm --id 0 00:20:10.142 21:23:44 -- common/autotest_common.sh@796 -- # type=--id 00:20:10.142 21:23:44 -- common/autotest_common.sh@797 -- # id=0 00:20:10.142 21:23:44 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:10.142 21:23:44 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:10.142 21:23:44 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:10.142 21:23:44 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:10.142 21:23:44 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:10.142 21:23:44 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:10.142 nvmf_trace.0 00:20:10.142 21:23:44 -- common/autotest_common.sh@811 -- # return 0 00:20:10.142 21:23:44 -- target/zcopy.sh@1 -- # nvmftestfini 00:20:10.142 21:23:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:10.142 21:23:44 -- nvmf/common.sh@116 -- # sync 00:20:10.142 21:23:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:10.142 21:23:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:10.142 21:23:44 -- nvmf/common.sh@119 -- # set +e 00:20:10.142 21:23:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:10.142 21:23:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:10.142 rmmod nvme_rdma 00:20:10.142 rmmod nvme_fabrics 00:20:10.142 21:23:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:10.142 21:23:44 -- nvmf/common.sh@123 -- # set -e 00:20:10.142 21:23:44 -- nvmf/common.sh@124 -- # return 0 00:20:10.142 21:23:44 -- nvmf/common.sh@477 -- # '[' -n 1708775 ']' 
00:20:10.142 21:23:44 -- nvmf/common.sh@478 -- # killprocess 1708775 00:20:10.142 21:23:44 -- common/autotest_common.sh@926 -- # '[' -z 1708775 ']' 00:20:10.142 21:23:44 -- common/autotest_common.sh@930 -- # kill -0 1708775 00:20:10.142 21:23:44 -- common/autotest_common.sh@931 -- # uname 00:20:10.142 21:23:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.142 21:23:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1708775 00:20:10.142 21:23:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:10.142 21:23:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:10.142 21:23:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1708775' 00:20:10.142 killing process with pid 1708775 00:20:10.142 21:23:44 -- common/autotest_common.sh@945 -- # kill 1708775 00:20:10.142 21:23:44 -- common/autotest_common.sh@950 -- # wait 1708775 00:20:10.402 21:23:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:10.402 21:23:45 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:10.402 00:20:10.402 real 0m9.355s 00:20:10.402 user 0m3.657s 00:20:10.402 sys 0m6.415s 00:20:10.402 21:23:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.402 21:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:10.402 ************************************ 00:20:10.402 END TEST nvmf_zcopy 00:20:10.402 ************************************ 00:20:10.402 21:23:45 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:20:10.402 21:23:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:10.402 21:23:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:10.402 21:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:10.402 ************************************ 00:20:10.402 START TEST nvmf_nmic 00:20:10.402 ************************************ 00:20:10.402 21:23:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:20:10.402 * Looking for test storage... 
00:20:10.402 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:10.402 21:23:45 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.402 21:23:45 -- nvmf/common.sh@7 -- # uname -s 00:20:10.402 21:23:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.402 21:23:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.402 21:23:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.402 21:23:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.402 21:23:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.402 21:23:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.402 21:23:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.402 21:23:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.402 21:23:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.402 21:23:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.402 21:23:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:10.402 21:23:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:10.402 21:23:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.402 21:23:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.402 21:23:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.402 21:23:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:10.402 21:23:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.402 21:23:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.402 21:23:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.402 21:23:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.402 21:23:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.402 21:23:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.402 21:23:45 -- paths/export.sh@5 -- # export PATH 00:20:10.402 21:23:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.402 21:23:45 -- nvmf/common.sh@46 -- # : 0 00:20:10.402 21:23:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:10.402 21:23:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:10.402 21:23:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:10.402 21:23:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.402 21:23:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.402 21:23:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:10.402 21:23:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:10.402 21:23:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:10.402 21:23:45 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:10.402 21:23:45 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:10.402 21:23:45 -- target/nmic.sh@14 -- # nvmftestinit 00:20:10.402 21:23:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:10.402 21:23:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.402 21:23:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:10.402 21:23:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:10.402 21:23:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:10.402 21:23:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.402 21:23:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.402 21:23:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.662 21:23:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:10.662 21:23:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:10.662 21:23:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:10.662 21:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:18.783 21:23:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:18.783 21:23:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:18.783 21:23:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:18.783 21:23:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:18.783 21:23:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:18.783 21:23:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:18.783 21:23:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:18.783 21:23:53 -- nvmf/common.sh@294 -- # net_devs=() 00:20:18.783 21:23:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:18.783 21:23:53 -- nvmf/common.sh@295 -- # 
e810=() 00:20:18.783 21:23:53 -- nvmf/common.sh@295 -- # local -ga e810 00:20:18.783 21:23:53 -- nvmf/common.sh@296 -- # x722=() 00:20:18.783 21:23:53 -- nvmf/common.sh@296 -- # local -ga x722 00:20:18.783 21:23:53 -- nvmf/common.sh@297 -- # mlx=() 00:20:18.783 21:23:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:18.783 21:23:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.783 21:23:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:18.783 21:23:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:18.783 21:23:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:18.783 21:23:53 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:18.783 21:23:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:18.783 21:23:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:18.783 21:23:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:18.783 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:18.783 21:23:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:18.783 21:23:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:18.783 21:23:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:18.783 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:18.783 21:23:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:18.783 21:23:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:18.783 21:23:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:18.783 21:23:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.783 21:23:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:20:18.783 21:23:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.783 21:23:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:18.783 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:18.783 21:23:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.783 21:23:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:18.783 21:23:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.783 21:23:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:18.783 21:23:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.783 21:23:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:18.783 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:18.783 21:23:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.783 21:23:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:18.783 21:23:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:18.783 21:23:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:18.783 21:23:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:18.783 21:23:53 -- nvmf/common.sh@57 -- # uname 00:20:18.783 21:23:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:18.783 21:23:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:18.783 21:23:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:18.783 21:23:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:18.783 21:23:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:18.783 21:23:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:18.783 21:23:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:18.783 21:23:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:18.783 21:23:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:18.783 21:23:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:18.783 21:23:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:18.783 21:23:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:18.783 21:23:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:18.783 21:23:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:18.783 21:23:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:18.783 21:23:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:18.783 21:23:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:18.783 21:23:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.783 21:23:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:18.783 21:23:53 -- nvmf/common.sh@104 -- # continue 2 00:20:18.783 21:23:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:18.783 21:23:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.783 21:23:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.783 21:23:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:18.783 21:23:53 -- nvmf/common.sh@104 -- # continue 2 00:20:18.783 21:23:53 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:20:18.783 21:23:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:18.783 21:23:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:18.783 21:23:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:18.783 21:23:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:18.783 21:23:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:18.783 21:23:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:18.783 21:23:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:18.783 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:18.783 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:18.783 altname enp217s0f0np0 00:20:18.783 altname ens818f0np0 00:20:18.783 inet 192.168.100.8/24 scope global mlx_0_0 00:20:18.783 valid_lft forever preferred_lft forever 00:20:18.783 21:23:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:18.783 21:23:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:18.783 21:23:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:18.783 21:23:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:18.783 21:23:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:18.783 21:23:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:18.783 21:23:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:18.783 21:23:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:18.783 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:18.783 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:18.783 altname enp217s0f1np1 00:20:18.783 altname ens818f1np1 00:20:18.783 inet 192.168.100.9/24 scope global mlx_0_1 00:20:18.783 valid_lft forever preferred_lft forever 00:20:18.783 21:23:53 -- nvmf/common.sh@410 -- # return 0 00:20:18.783 21:23:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:18.783 21:23:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:18.783 21:23:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:18.783 21:23:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:18.783 21:23:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:18.783 21:23:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:18.783 21:23:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:18.783 21:23:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:18.783 21:23:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:19.041 21:23:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:19.041 21:23:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:19.041 21:23:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.041 21:23:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:19.041 21:23:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:19.041 21:23:53 -- nvmf/common.sh@104 -- # continue 2 00:20:19.041 21:23:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:19.041 21:23:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.041 21:23:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:19.041 21:23:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.042 21:23:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:19.042 21:23:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:19.042 21:23:53 -- 
nvmf/common.sh@104 -- # continue 2 00:20:19.042 21:23:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:19.042 21:23:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:19.042 21:23:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:19.042 21:23:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:19.042 21:23:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:19.042 21:23:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:19.042 21:23:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:19.042 21:23:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:19.042 21:23:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:19.042 21:23:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:19.042 21:23:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:19.042 21:23:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:19.042 21:23:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:19.042 192.168.100.9' 00:20:19.042 21:23:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:19.042 192.168.100.9' 00:20:19.042 21:23:53 -- nvmf/common.sh@445 -- # head -n 1 00:20:19.042 21:23:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:19.042 21:23:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:19.042 192.168.100.9' 00:20:19.042 21:23:53 -- nvmf/common.sh@446 -- # tail -n +2 00:20:19.042 21:23:53 -- nvmf/common.sh@446 -- # head -n 1 00:20:19.042 21:23:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:19.042 21:23:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:19.042 21:23:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:19.042 21:23:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:19.042 21:23:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:19.042 21:23:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:19.042 21:23:53 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:19.042 21:23:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:19.042 21:23:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:19.042 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:20:19.042 21:23:53 -- nvmf/common.sh@469 -- # nvmfpid=1713052 00:20:19.042 21:23:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:19.042 21:23:53 -- nvmf/common.sh@470 -- # waitforlisten 1713052 00:20:19.042 21:23:53 -- common/autotest_common.sh@819 -- # '[' -z 1713052 ']' 00:20:19.042 21:23:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.042 21:23:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:19.042 21:23:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.042 21:23:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:19.042 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:20:19.042 [2024-07-26 21:23:53.808784] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:20:19.042 [2024-07-26 21:23:53.808843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.042 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.042 [2024-07-26 21:23:53.897519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.299 [2024-07-26 21:23:53.938653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:19.299 [2024-07-26 21:23:53.938756] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.299 [2024-07-26 21:23:53.938767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.299 [2024-07-26 21:23:53.938777] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.299 [2024-07-26 21:23:53.938821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.299 [2024-07-26 21:23:53.938918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.299 [2024-07-26 21:23:53.939003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.299 [2024-07-26 21:23:53.939004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.866 21:23:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.866 21:23:54 -- common/autotest_common.sh@852 -- # return 0 00:20:19.866 21:23:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:19.866 21:23:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:19.866 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:19.866 21:23:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.866 21:23:54 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:19.866 21:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.866 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:19.866 [2024-07-26 21:23:54.685567] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x78e060/0x792550) succeed. 00:20:19.866 [2024-07-26 21:23:54.695709] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x78f650/0x7d3be0) succeed. 
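The nmic test traced below drives the target entirely through JSON-RPC (the rpc_cmd helper in the trace forwards its arguments to scripts/rpc.py). A minimal hand-run sketch of the same flow, using only commands and arguments that appear in this trace and assuming a target already listening on the default /var/tmp/spdk.sock with an RDMA NIC at 192.168.100.8 (values taken from this run; adjust for other setups):

# Sketch only: reproduces the nmic RPC flow by hand; not part of the recorded run.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # path as used in this job

# transport and backing bdev
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0

# first subsystem claims Malloc0 and listens on 192.168.100.8:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# a second subsystem can be created, but adding the same bdev to it is expected to fail
# ("bdev Malloc0 already claimed"), which is what test case1 in the trace verifies
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure'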
00:20:20.125 21:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.125 21:23:54 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:20.125 21:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.125 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.125 Malloc0 00:20:20.125 21:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.125 21:23:54 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:20.125 21:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.125 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.125 21:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.125 21:23:54 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.125 21:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.125 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.125 21:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.125 21:23:54 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:20.125 21:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.125 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.125 [2024-07-26 21:23:54.861490] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:20.125 21:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.125 21:23:54 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:20.125 test case1: single bdev can't be used in multiple subsystems 00:20:20.125 21:23:54 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:20.125 21:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.125 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.125 21:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.125 21:23:54 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:20:20.125 21:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.125 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.125 21:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.125 21:23:54 -- target/nmic.sh@28 -- # nmic_status=0 00:20:20.125 21:23:54 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:20.125 21:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.125 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.125 [2024-07-26 21:23:54.889279] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:20.125 [2024-07-26 21:23:54.889300] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:20.125 [2024-07-26 21:23:54.889309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.125 request: 00:20:20.125 { 00:20:20.125 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.125 "namespace": { 00:20:20.125 "bdev_name": "Malloc0" 00:20:20.125 }, 00:20:20.125 "method": "nvmf_subsystem_add_ns", 00:20:20.125 "req_id": 1 00:20:20.125 } 00:20:20.125 Got JSON-RPC error response 00:20:20.125 response: 00:20:20.125 { 
00:20:20.125 "code": -32602, 00:20:20.125 "message": "Invalid parameters" 00:20:20.125 } 00:20:20.125 21:23:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:20.125 21:23:54 -- target/nmic.sh@29 -- # nmic_status=1 00:20:20.125 21:23:54 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:20.125 21:23:54 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:20.125 Adding namespace failed - expected result. 00:20:20.125 21:23:54 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:20.125 test case2: host connect to nvmf target in multiple paths 00:20:20.125 21:23:54 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:20.125 21:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.125 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.125 [2024-07-26 21:23:54.901340] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:20.125 21:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.125 21:23:54 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:21.110 21:23:55 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:20:22.045 21:23:56 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:22.045 21:23:56 -- common/autotest_common.sh@1177 -- # local i=0 00:20:22.045 21:23:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:22.045 21:23:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:22.045 21:23:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:24.582 21:23:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:24.582 21:23:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:24.582 21:23:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:24.582 21:23:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:24.582 21:23:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:24.582 21:23:58 -- common/autotest_common.sh@1187 -- # return 0 00:20:24.582 21:23:58 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:24.582 [global] 00:20:24.582 thread=1 00:20:24.582 invalidate=1 00:20:24.582 rw=write 00:20:24.582 time_based=1 00:20:24.582 runtime=1 00:20:24.582 ioengine=libaio 00:20:24.582 direct=1 00:20:24.582 bs=4096 00:20:24.582 iodepth=1 00:20:24.582 norandommap=0 00:20:24.582 numjobs=1 00:20:24.582 00:20:24.582 verify_dump=1 00:20:24.582 verify_backlog=512 00:20:24.582 verify_state_save=0 00:20:24.582 do_verify=1 00:20:24.582 verify=crc32c-intel 00:20:24.582 [job0] 00:20:24.582 filename=/dev/nvme0n1 00:20:24.582 Could not set queue depth (nvme0n1) 00:20:24.582 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:24.582 fio-3.35 00:20:24.582 Starting 1 thread 00:20:25.520 00:20:25.520 job0: (groupid=0, jobs=1): err= 0: pid=1714145: Fri Jul 26 21:24:00 2024 00:20:25.520 read: IOPS=7160, 
BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:20:25.520 slat (nsec): min=5208, max=33105, avg=8988.55, stdev=1140.43 00:20:25.520 clat (usec): min=43, max=666, avg=58.86, stdev= 9.61 00:20:25.520 lat (usec): min=56, max=698, avg=67.85, stdev= 9.91 00:20:25.520 clat percentiles (usec): 00:20:25.520 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56], 00:20:25.520 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:20:25.520 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 64], 95.00th=[ 65], 00:20:25.520 | 99.00th=[ 70], 99.50th=[ 72], 99.90th=[ 121], 99.95th=[ 233], 00:20:25.520 | 99.99th=[ 668] 00:20:25.520 write: IOPS=7264, BW=28.4MiB/s (29.8MB/s)(28.4MiB/1001msec); 0 zone resets 00:20:25.520 slat (nsec): min=5658, max=41248, avg=10599.07, stdev=1334.96 00:20:25.520 clat (nsec): min=40631, max=84574, avg=55920.66, stdev=3860.69 00:20:25.520 lat (usec): min=47, max=100, avg=66.52, stdev= 4.22 00:20:25.520 clat percentiles (nsec): 00:20:25.520 | 1.00th=[47360], 5.00th=[49920], 10.00th=[50944], 20.00th=[52480], 00:20:25.520 | 30.00th=[54016], 40.00th=[55040], 50.00th=[56064], 60.00th=[56576], 00:20:25.520 | 70.00th=[57600], 80.00th=[59136], 90.00th=[60672], 95.00th=[62208], 00:20:25.520 | 99.00th=[66048], 99.50th=[67072], 99.90th=[72192], 99.95th=[76288], 00:20:25.520 | 99.99th=[84480] 00:20:25.520 bw ( KiB/s): min=29872, max=29872, per=100.00%, avg=29872.00, stdev= 0.00, samples=1 00:20:25.520 iops : min= 7468, max= 7468, avg=7468.00, stdev= 0.00, samples=1 00:20:25.520 lat (usec) : 50=2.62%, 100=97.30%, 250=0.06%, 500=0.01%, 750=0.01% 00:20:25.520 cpu : usr=8.90%, sys=19.80%, ctx=14440, majf=0, minf=2 00:20:25.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.520 issued rwts: total=7168,7272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:25.520 00:20:25.520 Run status group 0 (all jobs): 00:20:25.520 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:20:25.520 WRITE: bw=28.4MiB/s (29.8MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=28.4MiB (29.8MB), run=1001-1001msec 00:20:25.520 00:20:25.520 Disk stats (read/write): 00:20:25.520 nvme0n1: ios=6414/6656, merge=0/0, ticks=337/307, in_queue=644, util=90.48% 00:20:25.520 21:24:00 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:27.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:27.424 21:24:02 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:27.424 21:24:02 -- common/autotest_common.sh@1198 -- # local i=0 00:20:27.424 21:24:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:27.424 21:24:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:27.683 21:24:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:27.683 21:24:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:27.683 21:24:02 -- common/autotest_common.sh@1210 -- # return 0 00:20:27.683 21:24:02 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:27.683 21:24:02 -- target/nmic.sh@53 -- # nvmftestfini 00:20:27.683 21:24:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:27.683 21:24:02 -- nvmf/common.sh@116 -- # sync 00:20:27.683 21:24:02 -- nvmf/common.sh@118 -- # '[' 
rdma == tcp ']' 00:20:27.683 21:24:02 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:27.683 21:24:02 -- nvmf/common.sh@119 -- # set +e 00:20:27.683 21:24:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:27.683 21:24:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:27.683 rmmod nvme_rdma 00:20:27.683 rmmod nvme_fabrics 00:20:27.683 21:24:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:27.683 21:24:02 -- nvmf/common.sh@123 -- # set -e 00:20:27.683 21:24:02 -- nvmf/common.sh@124 -- # return 0 00:20:27.683 21:24:02 -- nvmf/common.sh@477 -- # '[' -n 1713052 ']' 00:20:27.683 21:24:02 -- nvmf/common.sh@478 -- # killprocess 1713052 00:20:27.683 21:24:02 -- common/autotest_common.sh@926 -- # '[' -z 1713052 ']' 00:20:27.683 21:24:02 -- common/autotest_common.sh@930 -- # kill -0 1713052 00:20:27.683 21:24:02 -- common/autotest_common.sh@931 -- # uname 00:20:27.683 21:24:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:27.683 21:24:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1713052 00:20:27.683 21:24:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:27.683 21:24:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:27.683 21:24:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1713052' 00:20:27.683 killing process with pid 1713052 00:20:27.683 21:24:02 -- common/autotest_common.sh@945 -- # kill 1713052 00:20:27.683 21:24:02 -- common/autotest_common.sh@950 -- # wait 1713052 00:20:27.942 21:24:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:27.942 21:24:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:27.942 00:20:27.942 real 0m17.569s 00:20:27.942 user 0m45.423s 00:20:27.942 sys 0m7.598s 00:20:27.942 21:24:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.942 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:20:27.942 ************************************ 00:20:27.942 END TEST nvmf_nmic 00:20:27.942 ************************************ 00:20:27.942 21:24:02 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:20:27.942 21:24:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:27.942 21:24:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:27.942 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:20:27.942 ************************************ 00:20:27.942 START TEST nvmf_fio_target 00:20:27.942 ************************************ 00:20:27.942 21:24:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:20:28.202 * Looking for test storage... 
00:20:28.202 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:28.202 21:24:02 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.202 21:24:02 -- nvmf/common.sh@7 -- # uname -s 00:20:28.202 21:24:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.202 21:24:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.202 21:24:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.202 21:24:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.202 21:24:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.202 21:24:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.202 21:24:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.202 21:24:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.202 21:24:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.202 21:24:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.202 21:24:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:28.202 21:24:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:28.202 21:24:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.202 21:24:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.202 21:24:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.202 21:24:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:28.202 21:24:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.202 21:24:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.202 21:24:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.202 21:24:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.202 21:24:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.202 21:24:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.202 21:24:02 -- paths/export.sh@5 -- # export PATH 00:20:28.202 21:24:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.202 21:24:02 -- nvmf/common.sh@46 -- # : 0 00:20:28.202 21:24:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:28.202 21:24:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:28.202 21:24:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:28.202 21:24:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.202 21:24:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.202 21:24:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:28.202 21:24:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:28.202 21:24:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:28.202 21:24:02 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:28.202 21:24:02 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:28.202 21:24:02 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:28.202 21:24:02 -- target/fio.sh@16 -- # nvmftestinit 00:20:28.202 21:24:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:28.202 21:24:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.202 21:24:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:28.202 21:24:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:28.202 21:24:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:28.202 21:24:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.202 21:24:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.202 21:24:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.202 21:24:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:28.202 21:24:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:28.202 21:24:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:28.202 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:20:36.324 21:24:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:36.324 21:24:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:36.324 21:24:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:36.324 21:24:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:36.324 21:24:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:36.324 21:24:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:36.324 21:24:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:36.324 21:24:10 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:36.324 21:24:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:36.324 21:24:10 -- nvmf/common.sh@295 -- # e810=() 00:20:36.324 21:24:10 -- nvmf/common.sh@295 -- # local -ga e810 00:20:36.324 21:24:10 -- nvmf/common.sh@296 -- # x722=() 00:20:36.324 21:24:10 -- nvmf/common.sh@296 -- # local -ga x722 00:20:36.324 21:24:10 -- nvmf/common.sh@297 -- # mlx=() 00:20:36.324 21:24:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:36.324 21:24:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.324 21:24:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:36.324 21:24:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:36.324 21:24:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:36.324 21:24:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:36.324 21:24:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:36.324 21:24:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:36.325 21:24:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:36.325 21:24:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:36.325 21:24:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:36.325 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:36.325 21:24:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:36.325 21:24:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:36.325 21:24:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:36.325 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:36.325 21:24:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:36.325 21:24:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:36.325 21:24:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:36.325 21:24:10 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.325 21:24:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:36.325 21:24:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.325 21:24:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:36.325 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:36.325 21:24:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.325 21:24:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:36.325 21:24:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.325 21:24:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:36.325 21:24:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.325 21:24:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:36.325 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:36.325 21:24:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.325 21:24:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:36.325 21:24:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:36.325 21:24:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:36.325 21:24:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:36.325 21:24:10 -- nvmf/common.sh@57 -- # uname 00:20:36.325 21:24:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:36.325 21:24:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:36.325 21:24:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:36.325 21:24:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:36.325 21:24:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:36.325 21:24:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:36.325 21:24:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:36.325 21:24:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:36.325 21:24:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:36.325 21:24:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:36.325 21:24:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:36.325 21:24:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:36.325 21:24:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:36.325 21:24:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:36.325 21:24:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:36.325 21:24:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:36.325 21:24:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:36.325 21:24:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.325 21:24:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:36.325 21:24:10 -- nvmf/common.sh@104 -- # continue 2 00:20:36.325 21:24:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:36.325 21:24:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.325 21:24:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.325 21:24:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:36.325 21:24:10 -- 
nvmf/common.sh@104 -- # continue 2 00:20:36.325 21:24:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:36.325 21:24:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:36.325 21:24:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:36.325 21:24:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:36.325 21:24:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:36.325 21:24:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:36.325 21:24:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:36.325 21:24:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:36.325 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:36.325 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:36.325 altname enp217s0f0np0 00:20:36.325 altname ens818f0np0 00:20:36.325 inet 192.168.100.8/24 scope global mlx_0_0 00:20:36.325 valid_lft forever preferred_lft forever 00:20:36.325 21:24:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:36.325 21:24:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:36.325 21:24:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:36.325 21:24:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:36.325 21:24:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:36.325 21:24:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:36.325 21:24:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:36.325 21:24:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:36.325 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:36.325 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:36.325 altname enp217s0f1np1 00:20:36.325 altname ens818f1np1 00:20:36.325 inet 192.168.100.9/24 scope global mlx_0_1 00:20:36.325 valid_lft forever preferred_lft forever 00:20:36.325 21:24:10 -- nvmf/common.sh@410 -- # return 0 00:20:36.325 21:24:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:36.325 21:24:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:36.325 21:24:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:36.325 21:24:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:36.325 21:24:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:36.325 21:24:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:36.325 21:24:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:36.325 21:24:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:36.325 21:24:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:36.325 21:24:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:36.325 21:24:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:36.325 21:24:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.325 21:24:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:36.325 21:24:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:36.325 21:24:11 -- nvmf/common.sh@104 -- # continue 2 00:20:36.325 21:24:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:36.325 21:24:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.325 21:24:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:36.325 21:24:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.325 21:24:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:20:36.325 21:24:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:36.325 21:24:11 -- nvmf/common.sh@104 -- # continue 2 00:20:36.325 21:24:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:36.325 21:24:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:36.325 21:24:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:36.325 21:24:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:36.325 21:24:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:36.325 21:24:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:36.325 21:24:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:36.325 21:24:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:36.325 21:24:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:36.325 21:24:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:36.325 21:24:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:36.325 21:24:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:36.325 21:24:11 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:36.325 192.168.100.9' 00:20:36.325 21:24:11 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:36.325 192.168.100.9' 00:20:36.325 21:24:11 -- nvmf/common.sh@445 -- # head -n 1 00:20:36.325 21:24:11 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:36.325 21:24:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:36.325 192.168.100.9' 00:20:36.325 21:24:11 -- nvmf/common.sh@446 -- # head -n 1 00:20:36.325 21:24:11 -- nvmf/common.sh@446 -- # tail -n +2 00:20:36.325 21:24:11 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:36.325 21:24:11 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:36.325 21:24:11 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:36.325 21:24:11 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:36.325 21:24:11 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:36.325 21:24:11 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:36.325 21:24:11 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:36.325 21:24:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:36.326 21:24:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:36.326 21:24:11 -- common/autotest_common.sh@10 -- # set +x 00:20:36.326 21:24:11 -- nvmf/common.sh@469 -- # nvmfpid=1719174 00:20:36.326 21:24:11 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:36.326 21:24:11 -- nvmf/common.sh@470 -- # waitforlisten 1719174 00:20:36.326 21:24:11 -- common/autotest_common.sh@819 -- # '[' -z 1719174 ']' 00:20:36.326 21:24:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.326 21:24:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:36.326 21:24:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.326 21:24:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:36.326 21:24:11 -- common/autotest_common.sh@10 -- # set +x 00:20:36.326 [2024-07-26 21:24:11.140839] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:20:36.326 [2024-07-26 21:24:11.140896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.326 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.585 [2024-07-26 21:24:11.227523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.585 [2024-07-26 21:24:11.266937] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:36.585 [2024-07-26 21:24:11.267041] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.585 [2024-07-26 21:24:11.267051] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.585 [2024-07-26 21:24:11.267060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.585 [2024-07-26 21:24:11.267105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.585 [2024-07-26 21:24:11.267199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.585 [2024-07-26 21:24:11.267296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.585 [2024-07-26 21:24:11.267298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.153 21:24:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.154 21:24:11 -- common/autotest_common.sh@852 -- # return 0 00:20:37.154 21:24:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:37.154 21:24:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:37.154 21:24:11 -- common/autotest_common.sh@10 -- # set +x 00:20:37.154 21:24:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.154 21:24:11 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:37.413 [2024-07-26 21:24:12.167644] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x924060/0x928550) succeed. 00:20:37.413 [2024-07-26 21:24:12.179456] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x925650/0x969be0) succeed. 
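The fio_target setup traced below builds several malloc bdevs, groups some of them into a RAID-0 bdev and a concat bdev, and exposes all of them through a single subsystem before connecting with nvme-cli and running fio. A condensed sketch of the equivalent rpc.py calls, using only commands and arguments that appear in this trace (ordering simplified; socket path and addresses assumed to match this run):

# Sketch only: condensed version of the fio.sh target setup traced below.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# plain malloc bdevs exported directly as namespaces (named Malloc0/Malloc1 in this run)
$RPC bdev_malloc_create 64 512
$RPC bdev_malloc_create 64 512

# two more malloc bdevs combined into a RAID-0 bdev
$RPC bdev_malloc_create 64 512
$RPC bdev_malloc_create 64 512
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'

# three more combined into a concat bdev
$RPC bdev_malloc_create 64 512
$RPC bdev_malloc_create 64 512
$RPC bdev_malloc_create 64 512
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# one subsystem exposes all of them over RDMA on 192.168.100.8:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for b in Malloc0 Malloc1 raid0 concat0; do
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
done
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420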
00:20:37.672 21:24:12 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:37.672 21:24:12 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:37.672 21:24:12 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:37.930 21:24:12 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:37.930 21:24:12 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:38.189 21:24:12 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:38.189 21:24:12 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:38.456 21:24:13 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:38.456 21:24:13 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:38.456 21:24:13 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:38.721 21:24:13 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:38.721 21:24:13 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:38.980 21:24:13 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:38.980 21:24:13 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:38.980 21:24:13 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:38.980 21:24:13 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:39.239 21:24:13 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:39.498 21:24:14 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:39.498 21:24:14 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.498 21:24:14 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:39.498 21:24:14 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:39.757 21:24:14 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:40.017 [2024-07-26 21:24:14.687674] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:40.017 21:24:14 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:40.017 21:24:14 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:40.276 21:24:15 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:41.213 21:24:16 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:41.213 21:24:16 -- common/autotest_common.sh@1177 -- # local 
i=0 00:20:41.213 21:24:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:41.213 21:24:16 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:20:41.213 21:24:16 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:20:41.213 21:24:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:43.190 21:24:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:43.190 21:24:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:43.190 21:24:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:43.190 21:24:18 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:20:43.190 21:24:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:43.190 21:24:18 -- common/autotest_common.sh@1187 -- # return 0 00:20:43.190 21:24:18 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:43.449 [global] 00:20:43.449 thread=1 00:20:43.449 invalidate=1 00:20:43.449 rw=write 00:20:43.449 time_based=1 00:20:43.449 runtime=1 00:20:43.449 ioengine=libaio 00:20:43.449 direct=1 00:20:43.449 bs=4096 00:20:43.449 iodepth=1 00:20:43.449 norandommap=0 00:20:43.449 numjobs=1 00:20:43.449 00:20:43.449 verify_dump=1 00:20:43.449 verify_backlog=512 00:20:43.449 verify_state_save=0 00:20:43.449 do_verify=1 00:20:43.449 verify=crc32c-intel 00:20:43.449 [job0] 00:20:43.449 filename=/dev/nvme0n1 00:20:43.449 [job1] 00:20:43.449 filename=/dev/nvme0n2 00:20:43.449 [job2] 00:20:43.449 filename=/dev/nvme0n3 00:20:43.449 [job3] 00:20:43.449 filename=/dev/nvme0n4 00:20:43.449 Could not set queue depth (nvme0n1) 00:20:43.449 Could not set queue depth (nvme0n2) 00:20:43.449 Could not set queue depth (nvme0n3) 00:20:43.449 Could not set queue depth (nvme0n4) 00:20:43.707 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:43.707 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:43.707 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:43.707 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:43.707 fio-3.35 00:20:43.707 Starting 4 threads 00:20:45.087 00:20:45.087 job0: (groupid=0, jobs=1): err= 0: pid=1720727: Fri Jul 26 21:24:19 2024 00:20:45.087 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:20:45.087 slat (nsec): min=8236, max=32299, avg=8935.75, stdev=976.90 00:20:45.087 clat (usec): min=51, max=237, avg=109.22, stdev=37.58 00:20:45.087 lat (usec): min=75, max=247, avg=118.15, stdev=37.61 00:20:45.087 clat percentiles (usec): 00:20:45.087 | 1.00th=[ 73], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 82], 00:20:45.087 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 92], 00:20:45.087 | 70.00th=[ 131], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 178], 00:20:45.087 | 99.00th=[ 202], 99.50th=[ 217], 99.90th=[ 235], 99.95th=[ 235], 00:20:45.087 | 99.99th=[ 237] 00:20:45.087 write: IOPS=4381, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1001msec); 0 zone resets 00:20:45.087 slat (nsec): min=6599, max=34328, avg=10810.74, stdev=1057.73 00:20:45.087 clat (usec): min=63, max=219, avg=102.77, stdev=32.93 00:20:45.087 lat (usec): min=73, max=230, avg=113.58, stdev=33.17 00:20:45.087 clat percentiles (usec): 00:20:45.087 | 1.00th=[ 70], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 79], 00:20:45.087 | 30.00th=[ 
81], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 89], 00:20:45.087 | 70.00th=[ 122], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:20:45.087 | 99.00th=[ 186], 99.50th=[ 198], 99.90th=[ 208], 99.95th=[ 212], 00:20:45.087 | 99.99th=[ 219] 00:20:45.087 bw ( KiB/s): min=13192, max=13192, per=20.27%, avg=13192.00, stdev= 0.00, samples=1 00:20:45.087 iops : min= 3298, max= 3298, avg=3298.00, stdev= 0.00, samples=1 00:20:45.087 lat (usec) : 100=66.53%, 250=33.47% 00:20:45.087 cpu : usr=6.00%, sys=11.40%, ctx=8482, majf=0, minf=1 00:20:45.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:45.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.087 issued rwts: total=4096,4386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:45.087 job1: (groupid=0, jobs=1): err= 0: pid=1720728: Fri Jul 26 21:24:19 2024 00:20:45.087 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:20:45.087 slat (nsec): min=3196, max=34572, avg=6636.34, stdev=3217.66 00:20:45.087 clat (usec): min=57, max=205, avg=93.58, stdev=32.98 00:20:45.087 lat (usec): min=61, max=225, avg=100.22, stdev=35.25 00:20:45.087 clat percentiles (usec): 00:20:45.087 | 1.00th=[ 63], 5.00th=[ 66], 10.00th=[ 68], 20.00th=[ 70], 00:20:45.087 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 80], 00:20:45.087 | 70.00th=[ 110], 80.00th=[ 133], 90.00th=[ 145], 95.00th=[ 159], 00:20:45.087 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 198], 99.95th=[ 202], 00:20:45.087 | 99.99th=[ 206] 00:20:45.087 write: IOPS=5087, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1001msec); 0 zone resets 00:20:45.087 slat (nsec): min=4083, max=41084, avg=8411.91, stdev=4439.87 00:20:45.087 clat (usec): min=50, max=206, avg=94.39, stdev=35.35 00:20:45.087 lat (usec): min=58, max=219, avg=102.81, stdev=37.80 00:20:45.087 clat percentiles (usec): 00:20:45.087 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 67], 00:20:45.087 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 83], 00:20:45.087 | 70.00th=[ 118], 80.00th=[ 135], 90.00th=[ 149], 95.00th=[ 161], 00:20:45.087 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 196], 99.95th=[ 196], 00:20:45.087 | 99.99th=[ 206] 00:20:45.087 bw ( KiB/s): min=26000, max=26000, per=39.95%, avg=26000.00, stdev= 0.00, samples=1 00:20:45.087 iops : min= 6500, max= 6500, avg=6500.00, stdev= 0.00, samples=1 00:20:45.087 lat (usec) : 100=66.55%, 250=33.45% 00:20:45.087 cpu : usr=4.80%, sys=9.30%, ctx=9701, majf=0, minf=2 00:20:45.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:45.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.087 issued rwts: total=4608,5093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:45.087 job2: (groupid=0, jobs=1): err= 0: pid=1720731: Fri Jul 26 21:24:19 2024 00:20:45.087 read: IOPS=3543, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1001msec) 00:20:45.087 slat (nsec): min=8381, max=21496, avg=9293.58, stdev=898.98 00:20:45.087 clat (usec): min=70, max=245, avg=131.84, stdev=35.79 00:20:45.087 lat (usec): min=79, max=254, avg=141.13, stdev=35.86 00:20:45.087 clat percentiles (usec): 00:20:45.087 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 90], 00:20:45.087 | 30.00th=[ 96], 40.00th=[ 
129], 50.00th=[ 141], 60.00th=[ 149], 00:20:45.087 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 182], 00:20:45.087 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 241], 99.95th=[ 241], 00:20:45.087 | 99.99th=[ 245] 00:20:45.087 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:20:45.087 slat (nsec): min=10292, max=43265, avg=11529.39, stdev=1483.39 00:20:45.087 clat (usec): min=68, max=226, avg=123.21, stdev=34.29 00:20:45.087 lat (usec): min=78, max=237, avg=134.74, stdev=34.35 00:20:45.087 clat percentiles (usec): 00:20:45.087 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 85], 00:20:45.087 | 30.00th=[ 89], 40.00th=[ 101], 50.00th=[ 135], 60.00th=[ 143], 00:20:45.087 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 172], 00:20:45.087 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 225], 99.95th=[ 227], 00:20:45.087 | 99.99th=[ 227] 00:20:45.087 bw ( KiB/s): min=15688, max=15688, per=24.10%, avg=15688.00, stdev= 0.00, samples=1 00:20:45.087 iops : min= 3922, max= 3922, avg=3922.00, stdev= 0.00, samples=1 00:20:45.087 lat (usec) : 100=36.21%, 250=63.79% 00:20:45.087 cpu : usr=6.00%, sys=8.80%, ctx=7132, majf=0, minf=1 00:20:45.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:45.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.087 issued rwts: total=3547,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:45.087 job3: (groupid=0, jobs=1): err= 0: pid=1720732: Fri Jul 26 21:24:19 2024 00:20:45.087 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:20:45.087 slat (nsec): min=8441, max=48182, avg=10546.48, stdev=2626.01 00:20:45.087 clat (usec): min=66, max=226, avg=148.87, stdev=25.20 00:20:45.087 lat (usec): min=85, max=239, avg=159.41, stdev=24.97 00:20:45.087 clat percentiles (usec): 00:20:45.087 | 1.00th=[ 86], 5.00th=[ 105], 10.00th=[ 119], 20.00th=[ 130], 00:20:45.087 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 157], 00:20:45.087 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 188], 00:20:45.087 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 223], 99.95th=[ 227], 00:20:45.087 | 99.99th=[ 227] 00:20:45.087 write: IOPS=3221, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec); 0 zone resets 00:20:45.087 slat (nsec): min=10321, max=42708, avg=12620.45, stdev=2727.89 00:20:45.087 clat (usec): min=69, max=217, avg=140.66, stdev=23.29 00:20:45.087 lat (usec): min=85, max=229, avg=153.28, stdev=23.29 00:20:45.087 clat percentiles (usec): 00:20:45.087 | 1.00th=[ 85], 5.00th=[ 103], 10.00th=[ 113], 20.00th=[ 122], 00:20:45.087 | 30.00th=[ 129], 40.00th=[ 135], 50.00th=[ 143], 60.00th=[ 147], 00:20:45.087 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 172], 95.00th=[ 182], 00:20:45.087 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 217], 00:20:45.087 | 99.99th=[ 219] 00:20:45.087 bw ( KiB/s): min=12456, max=12456, per=19.14%, avg=12456.00, stdev= 0.00, samples=1 00:20:45.087 iops : min= 3114, max= 3114, avg=3114.00, stdev= 0.00, samples=1 00:20:45.087 lat (usec) : 100=4.24%, 250=95.76% 00:20:45.087 cpu : usr=3.90%, sys=10.20%, ctx=6297, majf=0, minf=1 00:20:45.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:45.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:45.087 issued rwts: total=3072,3225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:45.087 00:20:45.087 Run status group 0 (all jobs): 00:20:45.087 READ: bw=59.8MiB/s (62.7MB/s), 12.0MiB/s-18.0MiB/s (12.6MB/s-18.9MB/s), io=59.9MiB (62.8MB), run=1001-1001msec 00:20:45.087 WRITE: bw=63.6MiB/s (66.6MB/s), 12.6MiB/s-19.9MiB/s (13.2MB/s-20.8MB/s), io=63.6MiB (66.7MB), run=1001-1001msec 00:20:45.087 00:20:45.087 Disk stats (read/write): 00:20:45.087 nvme0n1: ios=3121/3454, merge=0/0, ticks=338/364, in_queue=702, util=82.67% 00:20:45.087 nvme0n2: ios=4096/4380, merge=0/0, ticks=336/351, in_queue=687, util=83.79% 00:20:45.087 nvme0n3: ios=2825/3072, merge=0/0, ticks=348/322, in_queue=670, util=87.94% 00:20:45.087 nvme0n4: ios=2560/2607, merge=0/0, ticks=366/343, in_queue=709, util=89.29% 00:20:45.088 21:24:19 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:45.088 [global] 00:20:45.088 thread=1 00:20:45.088 invalidate=1 00:20:45.088 rw=randwrite 00:20:45.088 time_based=1 00:20:45.088 runtime=1 00:20:45.088 ioengine=libaio 00:20:45.088 direct=1 00:20:45.088 bs=4096 00:20:45.088 iodepth=1 00:20:45.088 norandommap=0 00:20:45.088 numjobs=1 00:20:45.088 00:20:45.088 verify_dump=1 00:20:45.088 verify_backlog=512 00:20:45.088 verify_state_save=0 00:20:45.088 do_verify=1 00:20:45.088 verify=crc32c-intel 00:20:45.088 [job0] 00:20:45.088 filename=/dev/nvme0n1 00:20:45.088 [job1] 00:20:45.088 filename=/dev/nvme0n2 00:20:45.088 [job2] 00:20:45.088 filename=/dev/nvme0n3 00:20:45.088 [job3] 00:20:45.088 filename=/dev/nvme0n4 00:20:45.088 Could not set queue depth (nvme0n1) 00:20:45.088 Could not set queue depth (nvme0n2) 00:20:45.088 Could not set queue depth (nvme0n3) 00:20:45.088 Could not set queue depth (nvme0n4) 00:20:45.347 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.347 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.347 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.347 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.347 fio-3.35 00:20:45.347 Starting 4 threads 00:20:46.722 00:20:46.722 job0: (groupid=0, jobs=1): err= 0: pid=1721158: Fri Jul 26 21:24:21 2024 00:20:46.722 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:20:46.722 slat (nsec): min=8472, max=36195, avg=8996.48, stdev=865.71 00:20:46.722 clat (usec): min=62, max=159, avg=81.93, stdev= 8.76 00:20:46.722 lat (usec): min=75, max=168, avg=90.93, stdev= 8.84 00:20:46.722 clat percentiles (usec): 00:20:46.722 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:20:46.722 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:20:46.722 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 89], 95.00th=[ 93], 00:20:46.722 | 99.00th=[ 126], 99.50th=[ 133], 99.90th=[ 141], 99.95th=[ 145], 00:20:46.722 | 99.99th=[ 159] 00:20:46.722 write: IOPS=5568, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1001msec); 0 zone resets 00:20:46.722 slat (nsec): min=10282, max=47352, avg=10934.45, stdev=1068.02 00:20:46.722 clat (usec): min=58, max=169, avg=81.05, stdev=12.56 00:20:46.722 lat (usec): min=74, max=180, avg=91.98, stdev=12.68 00:20:46.722 clat percentiles (usec): 00:20:46.722 | 1.00th=[ 68], 
5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:20:46.722 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 80], 00:20:46.722 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 114], 00:20:46.722 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 155], 99.95th=[ 163], 00:20:46.722 | 99.99th=[ 169] 00:20:46.722 bw ( KiB/s): min=21208, max=21208, per=28.35%, avg=21208.00, stdev= 0.00, samples=1 00:20:46.722 iops : min= 5302, max= 5302, avg=5302.00, stdev= 0.00, samples=1 00:20:46.722 lat (usec) : 100=94.11%, 250=5.89% 00:20:46.722 cpu : usr=6.00%, sys=11.90%, ctx=10694, majf=0, minf=1 00:20:46.722 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.722 issued rwts: total=5120,5574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.722 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:46.722 job1: (groupid=0, jobs=1): err= 0: pid=1721160: Fri Jul 26 21:24:21 2024 00:20:46.722 read: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec) 00:20:46.722 slat (nsec): min=8962, max=29162, avg=9588.93, stdev=1125.20 00:20:46.722 clat (usec): min=69, max=522, avg=111.70, stdev=21.04 00:20:46.722 lat (usec): min=78, max=535, avg=121.28, stdev=21.14 00:20:46.722 clat percentiles (usec): 00:20:46.722 | 1.00th=[ 74], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 86], 00:20:46.722 | 30.00th=[ 108], 40.00th=[ 114], 50.00th=[ 119], 60.00th=[ 121], 00:20:46.722 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 137], 00:20:46.722 | 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 188], 00:20:46.722 | 99.99th=[ 523] 00:20:46.722 write: IOPS=4137, BW=16.2MiB/s (16.9MB/s)(16.2MiB/1000msec); 0 zone resets 00:20:46.722 slat (nsec): min=11103, max=43137, avg=12005.57, stdev=1518.23 00:20:46.722 clat (usec): min=66, max=287, avg=103.50, stdev=20.36 00:20:46.722 lat (usec): min=78, max=299, avg=115.51, stdev=20.42 00:20:46.722 clat percentiles (usec): 00:20:46.722 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 80], 00:20:46.722 | 30.00th=[ 85], 40.00th=[ 103], 50.00th=[ 111], 60.00th=[ 115], 00:20:46.722 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 126], 95.00th=[ 130], 00:20:46.722 | 99.00th=[ 143], 99.50th=[ 157], 99.90th=[ 172], 99.95th=[ 178], 00:20:46.722 | 99.99th=[ 289] 00:20:46.722 bw ( KiB/s): min=19432, max=19432, per=25.97%, avg=19432.00, stdev= 0.00, samples=1 00:20:46.722 iops : min= 4858, max= 4858, avg=4858.00, stdev= 0.00, samples=1 00:20:46.722 lat (usec) : 100=32.28%, 250=67.69%, 500=0.01%, 750=0.01% 00:20:46.722 cpu : usr=7.60%, sys=13.10%, ctx=8233, majf=0, minf=1 00:20:46.722 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.722 issued rwts: total=4096,4137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.722 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:46.722 job2: (groupid=0, jobs=1): err= 0: pid=1721161: Fri Jul 26 21:24:21 2024 00:20:46.722 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:20:46.722 slat (nsec): min=8309, max=35554, avg=9160.50, stdev=862.65 00:20:46.722 clat (usec): min=73, max=182, avg=93.55, stdev=15.12 00:20:46.722 lat (usec): min=82, max=192, avg=102.71, stdev=15.15 00:20:46.723 clat percentiles (usec): 00:20:46.723 | 
1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 84], 00:20:46.723 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:20:46.723 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 122], 95.00th=[ 130], 00:20:46.723 | 99.00th=[ 143], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 176], 00:20:46.723 | 99.99th=[ 184] 00:20:46.723 write: IOPS=5101, BW=19.9MiB/s (20.9MB/s)(19.9MiB/1001msec); 0 zone resets 00:20:46.723 slat (nsec): min=10156, max=37367, avg=10815.92, stdev=1006.12 00:20:46.723 clat (usec): min=69, max=179, avg=88.57, stdev=13.37 00:20:46.723 lat (usec): min=79, max=191, avg=99.39, stdev=13.48 00:20:46.723 clat percentiles (usec): 00:20:46.723 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:20:46.723 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 00:20:46.723 | 70.00th=[ 89], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 123], 00:20:46.723 | 99.00th=[ 135], 99.50th=[ 145], 99.90th=[ 169], 99.95th=[ 176], 00:20:46.723 | 99.99th=[ 180] 00:20:46.723 bw ( KiB/s): min=20480, max=20480, per=27.37%, avg=20480.00, stdev= 0.00, samples=1 00:20:46.723 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:46.723 lat (usec) : 100=86.32%, 250=13.68% 00:20:46.723 cpu : usr=5.60%, sys=14.40%, ctx=9715, majf=0, minf=2 00:20:46.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.723 issued rwts: total=4608,5107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:46.723 job3: (groupid=0, jobs=1): err= 0: pid=1721162: Fri Jul 26 21:24:21 2024 00:20:46.723 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:20:46.723 slat (nsec): min=8289, max=28253, avg=9156.02, stdev=1077.81 00:20:46.723 clat (usec): min=79, max=179, avg=123.93, stdev=10.64 00:20:46.723 lat (usec): min=88, max=192, avg=133.09, stdev=10.65 00:20:46.723 clat percentiles (usec): 00:20:46.723 | 1.00th=[ 97], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 117], 00:20:46.723 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 126], 00:20:46.723 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 141], 00:20:46.723 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 180], 00:20:46.723 | 99.99th=[ 180] 00:20:46.723 write: IOPS=3901, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1001msec); 0 zone resets 00:20:46.723 slat (nsec): min=5972, max=41565, avg=10835.07, stdev=1143.56 00:20:46.723 clat (usec): min=71, max=498, avg=119.31, stdev=12.59 00:20:46.723 lat (usec): min=79, max=509, avg=130.14, stdev=12.62 00:20:46.723 clat percentiles (usec): 00:20:46.723 | 1.00th=[ 91], 5.00th=[ 102], 10.00th=[ 108], 20.00th=[ 112], 00:20:46.723 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:20:46.723 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 137], 00:20:46.723 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 180], 00:20:46.723 | 99.99th=[ 498] 00:20:46.723 bw ( KiB/s): min=16384, max=16384, per=21.90%, avg=16384.00, stdev= 0.00, samples=1 00:20:46.723 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:46.723 lat (usec) : 100=2.72%, 250=97.26%, 500=0.01% 00:20:46.723 cpu : usr=6.10%, sys=9.50%, ctx=7490, majf=0, minf=1 00:20:46.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.723 issued rwts: total=3584,3905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:46.723 00:20:46.723 Run status group 0 (all jobs): 00:20:46.723 READ: bw=67.9MiB/s (71.2MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=68.0MiB (71.3MB), run=1000-1001msec 00:20:46.723 WRITE: bw=73.1MiB/s (76.6MB/s), 15.2MiB/s-21.8MiB/s (16.0MB/s-22.8MB/s), io=73.1MiB (76.7MB), run=1000-1001msec 00:20:46.723 00:20:46.723 Disk stats (read/write): 00:20:46.723 nvme0n1: ios=4294/4608, merge=0/0, ticks=348/326, in_queue=674, util=84.07% 00:20:46.723 nvme0n2: ios=3378/3584, merge=0/0, ticks=338/316, in_queue=654, util=84.90% 00:20:46.723 nvme0n3: ios=3916/4096, merge=0/0, ticks=345/319, in_queue=664, util=88.23% 00:20:46.723 nvme0n4: ios=3072/3147, merge=0/0, ticks=355/352, in_queue=707, util=89.47% 00:20:46.723 21:24:21 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:46.723 [global] 00:20:46.723 thread=1 00:20:46.723 invalidate=1 00:20:46.723 rw=write 00:20:46.723 time_based=1 00:20:46.723 runtime=1 00:20:46.723 ioengine=libaio 00:20:46.723 direct=1 00:20:46.723 bs=4096 00:20:46.723 iodepth=128 00:20:46.723 norandommap=0 00:20:46.723 numjobs=1 00:20:46.723 00:20:46.723 verify_dump=1 00:20:46.723 verify_backlog=512 00:20:46.723 verify_state_save=0 00:20:46.723 do_verify=1 00:20:46.723 verify=crc32c-intel 00:20:46.723 [job0] 00:20:46.723 filename=/dev/nvme0n1 00:20:46.723 [job1] 00:20:46.723 filename=/dev/nvme0n2 00:20:46.723 [job2] 00:20:46.723 filename=/dev/nvme0n3 00:20:46.723 [job3] 00:20:46.723 filename=/dev/nvme0n4 00:20:46.723 Could not set queue depth (nvme0n1) 00:20:46.723 Could not set queue depth (nvme0n2) 00:20:46.723 Could not set queue depth (nvme0n3) 00:20:46.723 Could not set queue depth (nvme0n4) 00:20:46.981 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:46.981 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:46.981 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:46.981 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:46.981 fio-3.35 00:20:46.981 Starting 4 threads 00:20:48.387 00:20:48.387 job0: (groupid=0, jobs=1): err= 0: pid=1721587: Fri Jul 26 21:24:22 2024 00:20:48.387 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:20:48.387 slat (usec): min=2, max=1248, avg=117.02, stdev=300.74 00:20:48.387 clat (usec): min=13676, max=16586, avg=15116.22, stdev=443.82 00:20:48.387 lat (usec): min=13680, max=16788, avg=15233.24, stdev=424.51 00:20:48.387 clat percentiles (usec): 00:20:48.387 | 1.00th=[13960], 5.00th=[14222], 10.00th=[14484], 20.00th=[14746], 00:20:48.387 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15139], 60.00th=[15270], 00:20:48.387 | 70.00th=[15401], 80.00th=[15401], 90.00th=[15533], 95.00th=[15795], 00:20:48.387 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16450], 99.95th=[16581], 00:20:48.387 | 99.99th=[16581] 00:20:48.387 write: IOPS=4550, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1003msec); 0 zone resets 00:20:48.387 slat (usec): min=2, max=1602, avg=111.40, stdev=286.93 00:20:48.387 clat (usec): min=2760, max=17115, avg=14244.53, stdev=1106.72 
00:20:48.387 lat (usec): min=3591, max=17118, avg=14355.93, stdev=1101.74 00:20:48.387 clat percentiles (usec): 00:20:48.387 | 1.00th=[ 8094], 5.00th=[13435], 10.00th=[13566], 20.00th=[13960], 00:20:48.387 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14484], 00:20:48.387 | 70.00th=[14615], 80.00th=[14746], 90.00th=[14877], 95.00th=[15008], 00:20:48.387 | 99.00th=[15533], 99.50th=[15664], 99.90th=[16188], 99.95th=[17171], 00:20:48.387 | 99.99th=[17171] 00:20:48.387 bw ( KiB/s): min=17556, max=17904, per=16.47%, avg=17730.00, stdev=246.07, samples=2 00:20:48.387 iops : min= 4389, max= 4476, avg=4432.50, stdev=61.52, samples=2 00:20:48.387 lat (msec) : 4=0.08%, 10=0.77%, 20=99.15% 00:20:48.387 cpu : usr=2.09%, sys=2.79%, ctx=1283, majf=0, minf=1 00:20:48.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:48.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:48.387 issued rwts: total=4096,4564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.387 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:48.387 job1: (groupid=0, jobs=1): err= 0: pid=1721592: Fri Jul 26 21:24:22 2024 00:20:48.387 read: IOPS=9698, BW=37.9MiB/s (39.7MB/s)(38.0MiB/1003msec) 00:20:48.387 slat (usec): min=2, max=1598, avg=51.32, stdev=188.18 00:20:48.387 clat (usec): min=4010, max=8370, avg=6711.34, stdev=269.80 00:20:48.387 lat (usec): min=4012, max=8373, avg=6762.66, stdev=248.90 00:20:48.387 clat percentiles (usec): 00:20:48.387 | 1.00th=[ 5800], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 6587], 00:20:48.387 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6783], 00:20:48.387 | 70.00th=[ 6849], 80.00th=[ 6849], 90.00th=[ 6915], 95.00th=[ 6980], 00:20:48.387 | 99.00th=[ 7111], 99.50th=[ 7111], 99.90th=[ 8356], 99.95th=[ 8356], 00:20:48.387 | 99.99th=[ 8356] 00:20:48.387 write: IOPS=9733, BW=38.0MiB/s (39.9MB/s)(38.1MiB/1003msec); 0 zone resets 00:20:48.387 slat (usec): min=2, max=1638, avg=48.72, stdev=177.18 00:20:48.387 clat (usec): min=1684, max=7199, avg=6329.05, stdev=325.25 00:20:48.387 lat (usec): min=2321, max=7595, avg=6377.77, stdev=311.62 00:20:48.387 clat percentiles (usec): 00:20:48.387 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 6063], 20.00th=[ 6194], 00:20:48.387 | 30.00th=[ 6325], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6456], 00:20:48.387 | 70.00th=[ 6456], 80.00th=[ 6521], 90.00th=[ 6587], 95.00th=[ 6652], 00:20:48.387 | 99.00th=[ 6718], 99.50th=[ 6783], 99.90th=[ 6849], 99.95th=[ 7111], 00:20:48.387 | 99.99th=[ 7177] 00:20:48.387 bw ( KiB/s): min=37352, max=40391, per=36.11%, avg=38871.50, stdev=2148.90, samples=2 00:20:48.387 iops : min= 9338, max=10097, avg=9717.50, stdev=536.69, samples=2 00:20:48.387 lat (msec) : 2=0.01%, 4=0.17%, 10=99.83% 00:20:48.387 cpu : usr=3.09%, sys=6.19%, ctx=1240, majf=0, minf=1 00:20:48.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:48.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:48.387 issued rwts: total=9728,9763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.387 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:48.387 job2: (groupid=0, jobs=1): err= 0: pid=1721593: Fri Jul 26 21:24:22 2024 00:20:48.387 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:20:48.387 slat (usec): min=2, max=1270, 
avg=117.62, stdev=301.99 00:20:48.387 clat (usec): min=13238, max=16741, avg=15111.18, stdev=476.92 00:20:48.387 lat (usec): min=13332, max=16793, avg=15228.80, stdev=462.91 00:20:48.387 clat percentiles (usec): 00:20:48.387 | 1.00th=[13960], 5.00th=[14222], 10.00th=[14484], 20.00th=[14746], 00:20:48.387 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:20:48.387 | 70.00th=[15401], 80.00th=[15401], 90.00th=[15664], 95.00th=[15795], 00:20:48.387 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16581], 99.95th=[16581], 00:20:48.387 | 99.99th=[16712] 00:20:48.387 write: IOPS=4515, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1003msec); 0 zone resets 00:20:48.387 slat (usec): min=2, max=2521, avg=111.65, stdev=287.75 00:20:48.387 clat (usec): min=2762, max=17120, avg=14331.18, stdev=1110.82 00:20:48.387 lat (usec): min=3578, max=17124, avg=14442.83, stdev=1109.58 00:20:48.387 clat percentiles (usec): 00:20:48.387 | 1.00th=[ 8094], 5.00th=[13435], 10.00th=[13698], 20.00th=[14091], 00:20:48.387 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14484], 60.00th=[14615], 00:20:48.387 | 70.00th=[14615], 80.00th=[14746], 90.00th=[15008], 95.00th=[15270], 00:20:48.387 | 99.00th=[15664], 99.50th=[15926], 99.90th=[17171], 99.95th=[17171], 00:20:48.387 | 99.99th=[17171] 00:20:48.387 bw ( KiB/s): min=17368, max=17848, per=16.36%, avg=17608.00, stdev=339.41, samples=2 00:20:48.387 iops : min= 4342, max= 4462, avg=4402.00, stdev=84.85, samples=2 00:20:48.387 lat (msec) : 4=0.13%, 10=0.64%, 20=99.23% 00:20:48.387 cpu : usr=1.90%, sys=3.09%, ctx=1259, majf=0, minf=1 00:20:48.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:48.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:48.387 issued rwts: total=4096,4529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.387 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:48.387 job3: (groupid=0, jobs=1): err= 0: pid=1721594: Fri Jul 26 21:24:22 2024 00:20:48.387 read: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec) 00:20:48.387 slat (usec): min=2, max=1447, avg=63.32, stdev=241.28 00:20:48.387 clat (usec): min=6500, max=8955, avg=8223.15, stdev=291.45 00:20:48.387 lat (usec): min=7372, max=9463, avg=8286.47, stdev=169.23 00:20:48.387 clat percentiles (usec): 00:20:48.387 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8094], 00:20:48.387 | 30.00th=[ 8160], 40.00th=[ 8225], 50.00th=[ 8291], 60.00th=[ 8356], 00:20:48.387 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8455], 95.00th=[ 8586], 00:20:48.387 | 99.00th=[ 8586], 99.50th=[ 8586], 99.90th=[ 8717], 99.95th=[ 8717], 00:20:48.387 | 99.99th=[ 8979] 00:20:48.387 write: IOPS=8129, BW=31.8MiB/s (33.3MB/s)(31.8MiB/1001msec); 0 zone resets 00:20:48.388 slat (usec): min=2, max=1421, avg=59.96, stdev=226.85 00:20:48.388 clat (usec): min=609, max=9285, avg=7817.62, stdev=517.58 00:20:48.388 lat (usec): min=1871, max=9386, avg=7877.58, stdev=468.43 00:20:48.388 clat percentiles (usec): 00:20:48.388 | 1.00th=[ 6063], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7701], 00:20:48.388 | 30.00th=[ 7767], 40.00th=[ 7832], 50.00th=[ 7898], 60.00th=[ 7963], 00:20:48.388 | 70.00th=[ 7963], 80.00th=[ 8029], 90.00th=[ 8160], 95.00th=[ 8291], 00:20:48.388 | 99.00th=[ 8455], 99.50th=[ 8455], 99.90th=[ 9241], 99.95th=[ 9241], 00:20:48.388 | 99.99th=[ 9241] 00:20:48.388 bw ( KiB/s): min=32702, max=32702, per=30.38%, avg=32702.00, stdev= 0.00, samples=1 00:20:48.388 
iops : min= 8175, max= 8175, avg=8175.00, stdev= 0.00, samples=1 00:20:48.388 lat (usec) : 750=0.01% 00:20:48.388 lat (msec) : 2=0.08%, 4=0.20%, 10=99.72% 00:20:48.388 cpu : usr=3.60%, sys=5.30%, ctx=1023, majf=0, minf=1 00:20:48.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:48.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:48.388 issued rwts: total=7680,8138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:48.388 00:20:48.388 Run status group 0 (all jobs): 00:20:48.388 READ: bw=99.7MiB/s (105MB/s), 16.0MiB/s-37.9MiB/s (16.7MB/s-39.7MB/s), io=100MiB (105MB), run=1001-1003msec 00:20:48.388 WRITE: bw=105MiB/s (110MB/s), 17.6MiB/s-38.0MiB/s (18.5MB/s-39.9MB/s), io=105MiB (111MB), run=1001-1003msec 00:20:48.388 00:20:48.388 Disk stats (read/write): 00:20:48.388 nvme0n1: ios=3633/3592, merge=0/0, ticks=17805/16955, in_queue=34760, util=84.57% 00:20:48.388 nvme0n2: ios=8027/8192, merge=0/0, ticks=26511/25315, in_queue=51826, util=85.12% 00:20:48.388 nvme0n3: ios=3566/3584, merge=0/0, ticks=17749/17035, in_queue=34784, util=88.36% 00:20:48.388 nvme0n4: ios=6537/6656, merge=0/0, ticks=17444/16505, in_queue=33949, util=89.40% 00:20:48.388 21:24:22 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:48.388 [global] 00:20:48.388 thread=1 00:20:48.388 invalidate=1 00:20:48.388 rw=randwrite 00:20:48.388 time_based=1 00:20:48.388 runtime=1 00:20:48.388 ioengine=libaio 00:20:48.388 direct=1 00:20:48.388 bs=4096 00:20:48.388 iodepth=128 00:20:48.388 norandommap=0 00:20:48.388 numjobs=1 00:20:48.388 00:20:48.388 verify_dump=1 00:20:48.388 verify_backlog=512 00:20:48.388 verify_state_save=0 00:20:48.388 do_verify=1 00:20:48.388 verify=crc32c-intel 00:20:48.388 [job0] 00:20:48.388 filename=/dev/nvme0n1 00:20:48.388 [job1] 00:20:48.388 filename=/dev/nvme0n2 00:20:48.388 [job2] 00:20:48.388 filename=/dev/nvme0n3 00:20:48.388 [job3] 00:20:48.388 filename=/dev/nvme0n4 00:20:48.388 Could not set queue depth (nvme0n1) 00:20:48.388 Could not set queue depth (nvme0n2) 00:20:48.388 Could not set queue depth (nvme0n3) 00:20:48.388 Could not set queue depth (nvme0n4) 00:20:48.649 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:48.649 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:48.649 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:48.649 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:48.649 fio-3.35 00:20:48.649 Starting 4 threads 00:20:50.029 00:20:50.029 job0: (groupid=0, jobs=1): err= 0: pid=1722014: Fri Jul 26 21:24:24 2024 00:20:50.029 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:20:50.029 slat (nsec): min=1983, max=2482.3k, avg=114892.46, stdev=366569.56 00:20:50.029 clat (usec): min=9116, max=17893, avg=14912.45, stdev=3095.89 00:20:50.029 lat (usec): min=10077, max=17895, avg=15027.34, stdev=3100.53 00:20:50.029 clat percentiles (usec): 00:20:50.029 | 1.00th=[10159], 5.00th=[10552], 10.00th=[10683], 20.00th=[10945], 00:20:50.029 | 30.00th=[11076], 40.00th=[16450], 50.00th=[17171], 60.00th=[17171], 00:20:50.029 | 
70.00th=[17433], 80.00th=[17433], 90.00th=[17695], 95.00th=[17695], 00:20:50.029 | 99.00th=[17957], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:20:50.029 | 99.99th=[17957] 00:20:50.029 write: IOPS=4520, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1003msec); 0 zone resets 00:20:50.029 slat (usec): min=2, max=2403, avg=112.58, stdev=364.02 00:20:50.029 clat (usec): min=2779, max=19853, avg=14526.28, stdev=3344.30 00:20:50.029 lat (usec): min=3411, max=19856, avg=14638.87, stdev=3349.97 00:20:50.029 clat percentiles (usec): 00:20:50.029 | 1.00th=[ 7701], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10421], 00:20:50.029 | 30.00th=[10552], 40.00th=[16319], 50.00th=[16909], 60.00th=[16909], 00:20:50.029 | 70.00th=[17171], 80.00th=[17171], 90.00th=[17171], 95.00th=[17433], 00:20:50.029 | 99.00th=[17695], 99.50th=[17695], 99.90th=[19268], 99.95th=[19268], 00:20:50.029 | 99.99th=[19792] 00:20:50.029 bw ( KiB/s): min=14776, max=20480, per=20.22%, avg=17628.00, stdev=4033.34, samples=2 00:20:50.029 iops : min= 3694, max= 5120, avg=4407.00, stdev=1008.33, samples=2 00:20:50.029 lat (msec) : 4=0.16%, 10=3.40%, 20=96.44% 00:20:50.029 cpu : usr=2.69%, sys=3.49%, ctx=1665, majf=0, minf=1 00:20:50.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:50.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:50.029 issued rwts: total=4096,4534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:50.029 job1: (groupid=0, jobs=1): err= 0: pid=1722015: Fri Jul 26 21:24:24 2024 00:20:50.029 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:20:50.029 slat (usec): min=2, max=1558, avg=114.93, stdev=285.53 00:20:50.029 clat (usec): min=9177, max=18082, avg=14911.57, stdev=3080.78 00:20:50.029 lat (usec): min=9573, max=18086, avg=15026.50, stdev=3094.12 00:20:50.029 clat percentiles (usec): 00:20:50.029 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10814], 20.00th=[10945], 00:20:50.030 | 30.00th=[11076], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:20:50.030 | 70.00th=[17433], 80.00th=[17433], 90.00th=[17433], 95.00th=[17695], 00:20:50.030 | 99.00th=[17957], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:20:50.030 | 99.99th=[17957] 00:20:50.030 write: IOPS=4516, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1003msec); 0 zone resets 00:20:50.030 slat (usec): min=2, max=1664, avg=113.07, stdev=280.04 00:20:50.030 clat (usec): min=2298, max=19294, avg=14526.82, stdev=3329.07 00:20:50.030 lat (usec): min=2829, max=19298, avg=14639.89, stdev=3341.92 00:20:50.030 clat percentiles (usec): 00:20:50.030 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:20:50.030 | 30.00th=[10552], 40.00th=[16319], 50.00th=[16712], 60.00th=[16909], 00:20:50.030 | 70.00th=[17171], 80.00th=[17171], 90.00th=[17171], 95.00th=[17433], 00:20:50.030 | 99.00th=[17695], 99.50th=[17695], 99.90th=[19268], 99.95th=[19268], 00:20:50.030 | 99.99th=[19268] 00:20:50.030 bw ( KiB/s): min=14744, max=20480, per=20.20%, avg=17612.00, stdev=4055.96, samples=2 00:20:50.030 iops : min= 3686, max= 5120, avg=4403.00, stdev=1013.99, samples=2 00:20:50.030 lat (msec) : 4=0.14%, 10=4.31%, 20=95.55% 00:20:50.030 cpu : usr=2.79%, sys=3.79%, ctx=2214, majf=0, minf=1 00:20:50.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:50.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.030 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:50.030 issued rwts: total=4096,4530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:50.030 job2: (groupid=0, jobs=1): err= 0: pid=1722016: Fri Jul 26 21:24:24 2024 00:20:50.030 read: IOPS=8428, BW=32.9MiB/s (34.5MB/s)(33.0MiB/1001msec) 00:20:50.030 slat (usec): min=2, max=996, avg=57.01, stdev=193.72 00:20:50.030 clat (usec): min=433, max=13733, avg=7409.94, stdev=2219.19 00:20:50.030 lat (usec): min=1176, max=13745, avg=7466.95, stdev=2228.72 00:20:50.030 clat percentiles (usec): 00:20:50.030 | 1.00th=[ 5473], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6521], 00:20:50.030 | 30.00th=[ 6652], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6718], 00:20:50.030 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[13042], 95.00th=[13304], 00:20:50.030 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:20:50.030 | 99.99th=[13698] 00:20:50.030 write: IOPS=8695, BW=34.0MiB/s (35.6MB/s)(34.0MiB/1001msec); 0 zone resets 00:20:50.030 slat (usec): min=2, max=1355, avg=56.33, stdev=185.81 00:20:50.030 clat (usec): min=5303, max=13270, avg=7356.66, stdev=2307.02 00:20:50.030 lat (usec): min=5624, max=13279, avg=7412.99, stdev=2318.47 00:20:50.030 clat percentiles (usec): 00:20:50.030 | 1.00th=[ 5538], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6194], 00:20:50.030 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6390], 00:20:50.030 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[12518], 95.00th=[12649], 00:20:50.030 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13304], 99.95th=[13304], 00:20:50.030 | 99.99th=[13304] 00:20:50.030 bw ( KiB/s): min=28840, max=28840, per=33.08%, avg=28840.00, stdev= 0.00, samples=1 00:20:50.030 iops : min= 7210, max= 7210, avg=7210.00, stdev= 0.00, samples=1 00:20:50.030 lat (usec) : 500=0.01% 00:20:50.030 lat (msec) : 2=0.09%, 4=0.28%, 10=85.22%, 20=14.40% 00:20:50.030 cpu : usr=3.10%, sys=7.40%, ctx=1309, majf=0, minf=1 00:20:50.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:50.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:50.030 issued rwts: total=8437,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:50.030 job3: (groupid=0, jobs=1): err= 0: pid=1722017: Fri Jul 26 21:24:24 2024 00:20:50.030 read: IOPS=3984, BW=15.6MiB/s (16.3MB/s)(15.6MiB/1003msec) 00:20:50.030 slat (usec): min=2, max=2554, avg=123.45, stdev=389.07 00:20:50.030 clat (usec): min=1571, max=18366, avg=15836.58, stdev=2089.94 00:20:50.030 lat (usec): min=2830, max=18369, avg=15960.04, stdev=2063.95 00:20:50.030 clat percentiles (usec): 00:20:50.030 | 1.00th=[ 9110], 5.00th=[12911], 10.00th=[13173], 20.00th=[13435], 00:20:50.030 | 30.00th=[15401], 40.00th=[16909], 50.00th=[16909], 60.00th=[17171], 00:20:50.030 | 70.00th=[17171], 80.00th=[17171], 90.00th=[17433], 95.00th=[17433], 00:20:50.030 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17695], 99.95th=[18482], 00:20:50.030 | 99.99th=[18482] 00:20:50.030 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:20:50.030 slat (usec): min=2, max=3159, avg=119.76, stdev=377.35 00:20:50.030 clat (usec): min=9765, max=17877, avg=15507.40, stdev=2321.13 00:20:50.030 lat (usec): min=9967, max=18076, avg=15627.16, stdev=2310.12 00:20:50.030 clat percentiles 
(usec): 00:20:50.030 | 1.00th=[11338], 5.00th=[12125], 10.00th=[12387], 20.00th=[12518], 00:20:50.030 | 30.00th=[12780], 40.00th=[16057], 50.00th=[17171], 60.00th=[17433], 00:20:50.030 | 70.00th=[17433], 80.00th=[17433], 90.00th=[17433], 95.00th=[17695], 00:20:50.030 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:20:50.030 | 99.99th=[17957] 00:20:50.030 bw ( KiB/s): min=15136, max=17632, per=18.79%, avg=16384.00, stdev=1764.94, samples=2 00:20:50.030 iops : min= 3784, max= 4408, avg=4096.00, stdev=441.23, samples=2 00:20:50.030 lat (msec) : 2=0.01%, 4=0.05%, 10=0.77%, 20=99.17% 00:20:50.030 cpu : usr=1.00%, sys=5.49%, ctx=2897, majf=0, minf=1 00:20:50.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:50.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:50.030 issued rwts: total=3996,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:50.030 00:20:50.030 Run status group 0 (all jobs): 00:20:50.030 READ: bw=80.3MiB/s (84.2MB/s), 15.6MiB/s-32.9MiB/s (16.3MB/s-34.5MB/s), io=80.6MiB (84.5MB), run=1001-1003msec 00:20:50.030 WRITE: bw=85.2MiB/s (89.3MB/s), 16.0MiB/s-34.0MiB/s (16.7MB/s-35.6MB/s), io=85.4MiB (89.6MB), run=1001-1003msec 00:20:50.030 00:20:50.030 Disk stats (read/write): 00:20:50.030 nvme0n1: ios=3633/3779, merge=0/0, ticks=12975/13242, in_queue=26217, util=84.67% 00:20:50.030 nvme0n2: ios=3584/3776, merge=0/0, ticks=12972/13183, in_queue=26155, util=85.22% 00:20:50.030 nvme0n3: ios=6724/7168, merge=0/0, ticks=12703/13446, in_queue=26149, util=88.37% 00:20:50.030 nvme0n4: ios=3255/3584, merge=0/0, ticks=12782/13516, in_queue=26298, util=89.41% 00:20:50.030 21:24:24 -- target/fio.sh@55 -- # sync 00:20:50.030 21:24:24 -- target/fio.sh@59 -- # fio_pid=1722286 00:20:50.030 21:24:24 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:50.030 21:24:24 -- target/fio.sh@61 -- # sleep 3 00:20:50.030 [global] 00:20:50.030 thread=1 00:20:50.030 invalidate=1 00:20:50.030 rw=read 00:20:50.030 time_based=1 00:20:50.030 runtime=10 00:20:50.030 ioengine=libaio 00:20:50.030 direct=1 00:20:50.030 bs=4096 00:20:50.030 iodepth=1 00:20:50.030 norandommap=1 00:20:50.030 numjobs=1 00:20:50.030 00:20:50.030 [job0] 00:20:50.030 filename=/dev/nvme0n1 00:20:50.030 [job1] 00:20:50.030 filename=/dev/nvme0n2 00:20:50.030 [job2] 00:20:50.030 filename=/dev/nvme0n3 00:20:50.030 [job3] 00:20:50.030 filename=/dev/nvme0n4 00:20:50.030 Could not set queue depth (nvme0n1) 00:20:50.030 Could not set queue depth (nvme0n2) 00:20:50.030 Could not set queue depth (nvme0n3) 00:20:50.030 Could not set queue depth (nvme0n4) 00:20:50.291 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.291 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.291 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.291 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.291 fio-3.35 00:20:50.291 Starting 4 threads 00:20:52.812 21:24:27 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:53.069 21:24:27 -- target/fio.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:53.069 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=78667776, buflen=4096 00:20:53.069 fio: pid=1722448, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:53.069 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=111083520, buflen=4096 00:20:53.069 fio: pid=1722447, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:53.069 21:24:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:53.069 21:24:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:53.326 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=59478016, buflen=4096 00:20:53.326 fio: pid=1722445, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:53.326 21:24:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:53.327 21:24:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:53.584 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=36888576, buflen=4096 00:20:53.584 fio: pid=1722446, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:53.584 21:24:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:53.584 21:24:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:53.584 00:20:53.584 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1722445: Fri Jul 26 21:24:28 2024 00:20:53.584 read: IOPS=10.3k, BW=40.3MiB/s (42.2MB/s)(121MiB/2999msec) 00:20:53.584 slat (usec): min=6, max=32078, avg=11.02, stdev=217.69 00:20:53.584 clat (usec): min=45, max=27115, avg=84.34, stdev=154.36 00:20:53.584 lat (usec): min=57, max=32165, avg=95.36, stdev=266.89 00:20:53.584 clat percentiles (usec): 00:20:53.584 | 1.00th=[ 60], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 76], 00:20:53.584 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 82], 00:20:53.584 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 95], 95.00th=[ 121], 00:20:53.584 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 159], 99.95th=[ 163], 00:20:53.584 | 99.99th=[ 178] 00:20:53.584 bw ( KiB/s): min=34536, max=44440, per=33.05%, avg=42401.60, stdev=4397.67, samples=5 00:20:53.584 iops : min= 8634, max=11110, avg=10600.40, stdev=1099.42, samples=5 00:20:53.584 lat (usec) : 50=0.03%, 100=91.09%, 250=8.88% 00:20:53.584 lat (msec) : 50=0.01% 00:20:53.584 cpu : usr=4.37%, sys=14.51%, ctx=30912, majf=0, minf=1 00:20:53.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.584 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.584 issued rwts: total=30906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.584 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1722446: Fri Jul 26 21:24:28 2024 00:20:53.584 read: IOPS=7934, BW=31.0MiB/s (32.5MB/s)(99.2MiB/3200msec) 00:20:53.584 slat (usec): min=8, max=17813, avg=12.44, stdev=228.48 00:20:53.584 clat (usec): min=45, max=31126, avg=112.02, 
stdev=196.75 00:20:53.584 lat (usec): min=57, max=31135, avg=124.46, stdev=301.22 00:20:53.584 clat percentiles (usec): 00:20:53.584 | 1.00th=[ 53], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 76], 00:20:53.584 | 30.00th=[ 111], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 125], 00:20:53.584 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 141], 00:20:53.584 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 194], 00:20:53.584 | 99.99th=[ 310] 00:20:53.584 bw ( KiB/s): min=29232, max=37203, per=24.03%, avg=30824.50, stdev=3143.93, samples=6 00:20:53.584 iops : min= 7308, max= 9300, avg=7706.00, stdev=785.68, samples=6 00:20:53.584 lat (usec) : 50=0.02%, 100=26.81%, 250=73.15%, 500=0.01% 00:20:53.584 lat (msec) : 50=0.01% 00:20:53.584 cpu : usr=3.97%, sys=10.97%, ctx=25397, majf=0, minf=1 00:20:53.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.584 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.584 issued rwts: total=25391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.584 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1722447: Fri Jul 26 21:24:28 2024 00:20:53.584 read: IOPS=9624, BW=37.6MiB/s (39.4MB/s)(106MiB/2818msec) 00:20:53.584 slat (usec): min=6, max=13907, avg=10.09, stdev=118.70 00:20:53.584 clat (usec): min=56, max=288, avg=91.88, stdev=12.17 00:20:53.584 lat (usec): min=64, max=14014, avg=101.97, stdev=119.41 00:20:53.584 clat percentiles (usec): 00:20:53.584 | 1.00th=[ 75], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:20:53.584 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:20:53.584 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 122], 00:20:53.584 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 151], 99.95th=[ 157], 00:20:53.584 | 99.99th=[ 169] 00:20:53.584 bw ( KiB/s): min=33232, max=40512, per=30.37%, avg=38956.80, stdev=3201.41, samples=5 00:20:53.584 iops : min= 8308, max=10128, avg=9739.20, stdev=800.35, samples=5 00:20:53.584 lat (usec) : 100=84.75%, 250=15.24%, 500=0.01% 00:20:53.584 cpu : usr=3.87%, sys=13.99%, ctx=27123, majf=0, minf=1 00:20:53.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.584 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.584 issued rwts: total=27121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.584 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1722448: Fri Jul 26 21:24:28 2024 00:20:53.584 read: IOPS=7286, BW=28.5MiB/s (29.8MB/s)(75.0MiB/2636msec) 00:20:53.584 slat (nsec): min=8277, max=36671, avg=9192.20, stdev=993.92 00:20:53.584 clat (usec): min=74, max=321, avg=125.15, stdev=11.74 00:20:53.584 lat (usec): min=82, max=330, avg=134.35, stdev=11.73 00:20:53.584 clat percentiles (usec): 00:20:53.584 | 1.00th=[ 92], 5.00th=[ 109], 10.00th=[ 114], 20.00th=[ 118], 00:20:53.584 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 127], 00:20:53.584 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 143], 00:20:53.584 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 182], 00:20:53.584 | 99.99th=[ 310] 00:20:53.584 bw ( KiB/s): 
min=29232, max=30224, per=23.03%, avg=29547.20, stdev=387.67, samples=5 00:20:53.584 iops : min= 7308, max= 7556, avg=7386.80, stdev=96.92, samples=5 00:20:53.584 lat (usec) : 100=2.40%, 250=97.58%, 500=0.01% 00:20:53.584 cpu : usr=3.38%, sys=10.63%, ctx=19207, majf=0, minf=2 00:20:53.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.584 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.584 issued rwts: total=19207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.585 00:20:53.585 Run status group 0 (all jobs): 00:20:53.585 READ: bw=125MiB/s (131MB/s), 28.5MiB/s-40.3MiB/s (29.8MB/s-42.2MB/s), io=401MiB (420MB), run=2636-3200msec 00:20:53.585 00:20:53.585 Disk stats (read/write): 00:20:53.585 nvme0n1: ios=29240/0, merge=0/0, ticks=2169/0, in_queue=2169, util=91.98% 00:20:53.585 nvme0n2: ios=23909/0, merge=0/0, ticks=2540/0, in_queue=2540, util=93.12% 00:20:53.585 nvme0n3: ios=25212/0, merge=0/0, ticks=2169/0, in_queue=2169, util=96.07% 00:20:53.585 nvme0n4: ios=19093/0, merge=0/0, ticks=2213/0, in_queue=2213, util=96.46% 00:20:53.842 21:24:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:53.842 21:24:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:53.842 21:24:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:53.842 21:24:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:54.099 21:24:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:54.099 21:24:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:54.356 21:24:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:54.357 21:24:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:54.613 21:24:29 -- target/fio.sh@69 -- # fio_status=0 00:20:54.613 21:24:29 -- target/fio.sh@70 -- # wait 1722286 00:20:54.613 21:24:29 -- target/fio.sh@70 -- # fio_status=4 00:20:54.613 21:24:29 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:55.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:55.542 21:24:30 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:55.542 21:24:30 -- common/autotest_common.sh@1198 -- # local i=0 00:20:55.542 21:24:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:55.542 21:24:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:55.542 21:24:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:55.542 21:24:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:55.542 21:24:30 -- common/autotest_common.sh@1210 -- # return 0 00:20:55.542 21:24:30 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:55.542 21:24:30 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:55.542 nvmf hotplug test: fio failed as expected 00:20:55.542 21:24:30 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:20:55.799 21:24:30 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:55.799 21:24:30 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:55.799 21:24:30 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:55.799 21:24:30 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:55.799 21:24:30 -- target/fio.sh@91 -- # nvmftestfini 00:20:55.799 21:24:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:55.799 21:24:30 -- nvmf/common.sh@116 -- # sync 00:20:55.799 21:24:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:55.799 21:24:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:55.799 21:24:30 -- nvmf/common.sh@119 -- # set +e 00:20:55.799 21:24:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:55.799 21:24:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:55.799 rmmod nvme_rdma 00:20:55.799 rmmod nvme_fabrics 00:20:55.799 21:24:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:55.799 21:24:30 -- nvmf/common.sh@123 -- # set -e 00:20:55.799 21:24:30 -- nvmf/common.sh@124 -- # return 0 00:20:55.799 21:24:30 -- nvmf/common.sh@477 -- # '[' -n 1719174 ']' 00:20:55.799 21:24:30 -- nvmf/common.sh@478 -- # killprocess 1719174 00:20:55.799 21:24:30 -- common/autotest_common.sh@926 -- # '[' -z 1719174 ']' 00:20:55.799 21:24:30 -- common/autotest_common.sh@930 -- # kill -0 1719174 00:20:55.799 21:24:30 -- common/autotest_common.sh@931 -- # uname 00:20:55.799 21:24:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:55.799 21:24:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1719174 00:20:55.799 21:24:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:55.799 21:24:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:55.799 21:24:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1719174' 00:20:55.799 killing process with pid 1719174 00:20:55.799 21:24:30 -- common/autotest_common.sh@945 -- # kill 1719174 00:20:55.799 21:24:30 -- common/autotest_common.sh@950 -- # wait 1719174 00:20:56.056 21:24:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:56.056 21:24:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:56.056 00:20:56.056 real 0m28.051s 00:20:56.056 user 2m5.954s 00:20:56.056 sys 0m11.513s 00:20:56.056 21:24:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.056 21:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:56.056 ************************************ 00:20:56.056 END TEST nvmf_fio_target 00:20:56.056 ************************************ 00:20:56.056 21:24:30 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:56.056 21:24:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:56.056 21:24:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:56.056 21:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:56.056 ************************************ 00:20:56.056 START TEST nvmf_bdevio 00:20:56.056 ************************************ 00:20:56.056 21:24:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:56.313 * Looking for test storage... 
00:20:56.313 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:56.313 21:24:30 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.313 21:24:30 -- nvmf/common.sh@7 -- # uname -s 00:20:56.313 21:24:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.313 21:24:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.313 21:24:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.313 21:24:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.313 21:24:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.313 21:24:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.313 21:24:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.313 21:24:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.313 21:24:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.313 21:24:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.313 21:24:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:56.313 21:24:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:56.313 21:24:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.313 21:24:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.313 21:24:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.313 21:24:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:56.314 21:24:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.314 21:24:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.314 21:24:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.314 21:24:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.314 21:24:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.314 21:24:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.314 21:24:30 -- paths/export.sh@5 -- # export PATH 00:20:56.314 21:24:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.314 21:24:30 -- nvmf/common.sh@46 -- # : 0 00:20:56.314 21:24:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:56.314 21:24:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:56.314 21:24:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:56.314 21:24:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.314 21:24:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.314 21:24:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:56.314 21:24:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:56.314 21:24:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:56.314 21:24:30 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:56.314 21:24:30 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:56.314 21:24:30 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:56.314 21:24:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:56.314 21:24:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.314 21:24:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:56.314 21:24:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:56.314 21:24:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:56.314 21:24:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.314 21:24:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.314 21:24:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.314 21:24:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:56.314 21:24:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:56.314 21:24:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:56.314 21:24:30 -- common/autotest_common.sh@10 -- # set +x 00:21:04.485 21:24:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:04.485 21:24:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:04.485 21:24:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:04.485 21:24:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:04.485 21:24:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:04.485 21:24:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:04.485 21:24:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:04.485 21:24:38 -- nvmf/common.sh@294 -- # net_devs=() 00:21:04.485 21:24:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:04.485 21:24:38 -- nvmf/common.sh@295 
-- # e810=() 00:21:04.485 21:24:38 -- nvmf/common.sh@295 -- # local -ga e810 00:21:04.485 21:24:38 -- nvmf/common.sh@296 -- # x722=() 00:21:04.485 21:24:38 -- nvmf/common.sh@296 -- # local -ga x722 00:21:04.485 21:24:38 -- nvmf/common.sh@297 -- # mlx=() 00:21:04.485 21:24:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:04.485 21:24:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.485 21:24:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:04.485 21:24:38 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:04.485 21:24:38 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:04.485 21:24:38 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:04.485 21:24:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:04.485 21:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:04.485 21:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:04.485 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:04.485 21:24:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:04.485 21:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:04.485 21:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:04.485 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:04.485 21:24:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:04.485 21:24:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:04.485 21:24:38 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:04.485 21:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.485 21:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:21:04.485 21:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.485 21:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:04.485 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:04.485 21:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.485 21:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:04.485 21:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.485 21:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:04.485 21:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.485 21:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:04.485 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:04.485 21:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.485 21:24:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:04.485 21:24:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:04.485 21:24:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:04.485 21:24:38 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:04.485 21:24:38 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:04.485 21:24:38 -- nvmf/common.sh@57 -- # uname 00:21:04.485 21:24:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:04.485 21:24:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:04.485 21:24:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:04.485 21:24:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:04.485 21:24:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:04.485 21:24:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:04.485 21:24:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:04.485 21:24:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:04.485 21:24:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:04.485 21:24:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:04.485 21:24:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:04.485 21:24:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:04.485 21:24:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:04.485 21:24:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:04.485 21:24:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:04.485 21:24:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:04.485 21:24:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:04.485 21:24:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.485 21:24:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:04.485 21:24:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:04.485 21:24:39 -- nvmf/common.sh@104 -- # continue 2 00:21:04.485 21:24:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:04.485 21:24:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.485 21:24:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:04.485 21:24:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.485 21:24:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:04.485 21:24:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:04.485 21:24:39 -- nvmf/common.sh@104 -- # continue 2 00:21:04.485 21:24:39 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:21:04.485 21:24:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:04.485 21:24:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:04.485 21:24:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:04.485 21:24:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:04.486 21:24:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:04.486 21:24:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:04.486 21:24:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:04.486 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:04.486 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:04.486 altname enp217s0f0np0 00:21:04.486 altname ens818f0np0 00:21:04.486 inet 192.168.100.8/24 scope global mlx_0_0 00:21:04.486 valid_lft forever preferred_lft forever 00:21:04.486 21:24:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:04.486 21:24:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:04.486 21:24:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:04.486 21:24:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:04.486 21:24:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:04.486 21:24:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:04.486 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:04.486 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:04.486 altname enp217s0f1np1 00:21:04.486 altname ens818f1np1 00:21:04.486 inet 192.168.100.9/24 scope global mlx_0_1 00:21:04.486 valid_lft forever preferred_lft forever 00:21:04.486 21:24:39 -- nvmf/common.sh@410 -- # return 0 00:21:04.486 21:24:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:04.486 21:24:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:04.486 21:24:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:04.486 21:24:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:04.486 21:24:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:04.486 21:24:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:04.486 21:24:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:04.486 21:24:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:04.486 21:24:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:04.486 21:24:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:04.486 21:24:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:04.486 21:24:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.486 21:24:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:04.486 21:24:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:04.486 21:24:39 -- nvmf/common.sh@104 -- # continue 2 00:21:04.486 21:24:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:04.486 21:24:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.486 21:24:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:04.486 21:24:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.486 21:24:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:04.486 21:24:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:04.486 21:24:39 -- 
nvmf/common.sh@104 -- # continue 2 00:21:04.486 21:24:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:04.486 21:24:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:04.486 21:24:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:04.486 21:24:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:04.486 21:24:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:04.486 21:24:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:04.486 21:24:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:04.486 21:24:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:04.486 192.168.100.9' 00:21:04.486 21:24:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:04.486 192.168.100.9' 00:21:04.486 21:24:39 -- nvmf/common.sh@445 -- # head -n 1 00:21:04.486 21:24:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:04.486 21:24:39 -- nvmf/common.sh@446 -- # tail -n +2 00:21:04.486 21:24:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:04.486 192.168.100.9' 00:21:04.486 21:24:39 -- nvmf/common.sh@446 -- # head -n 1 00:21:04.486 21:24:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:04.486 21:24:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:04.486 21:24:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:04.486 21:24:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:04.486 21:24:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:04.486 21:24:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:04.486 21:24:39 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:04.486 21:24:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:04.486 21:24:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:04.486 21:24:39 -- common/autotest_common.sh@10 -- # set +x 00:21:04.486 21:24:39 -- nvmf/common.sh@469 -- # nvmfpid=1727474 00:21:04.486 21:24:39 -- nvmf/common.sh@470 -- # waitforlisten 1727474 00:21:04.486 21:24:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:04.486 21:24:39 -- common/autotest_common.sh@819 -- # '[' -z 1727474 ']' 00:21:04.486 21:24:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.486 21:24:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:04.486 21:24:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.486 21:24:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:04.486 21:24:39 -- common/autotest_common.sh@10 -- # set +x 00:21:04.486 [2024-07-26 21:24:39.301409] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:21:04.486 [2024-07-26 21:24:39.301465] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.486 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.743 [2024-07-26 21:24:39.387322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.743 [2024-07-26 21:24:39.423685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:04.743 [2024-07-26 21:24:39.423800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.743 [2024-07-26 21:24:39.423809] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.743 [2024-07-26 21:24:39.423817] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.743 [2024-07-26 21:24:39.423936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:04.743 [2024-07-26 21:24:39.424347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:04.743 [2024-07-26 21:24:39.424435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.743 [2024-07-26 21:24:39.424436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:05.305 21:24:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:05.305 21:24:40 -- common/autotest_common.sh@852 -- # return 0 00:21:05.305 21:24:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:05.305 21:24:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:05.305 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:21:05.305 21:24:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.305 21:24:40 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:05.305 21:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.305 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:21:05.562 [2024-07-26 21:24:40.177606] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9fc940/0xa00e30) succeed. 00:21:05.562 [2024-07-26 21:24:40.187965] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9fdf30/0xa424c0) succeed. 
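The bdevio stage that follows configures the target over SPDK's JSON-RPC interface via the rpc_cmd helper. A minimal standalone sketch of the same sequence, assuming an nvmf_tgt is already running on the default /var/tmp/spdk.sock, that the commands are issued from the spdk source tree, and that 192.168.100.8 is the first RDMA target IP allocated above:

    # Sketch only: mirrors the rpc_cmd calls traced below.
    RPC=./scripts/rpc.py

    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport for the target
    $RPC bdev_malloc_create 64 512 -b Malloc0                               # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0           # attach Malloc0 as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420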
00:21:05.562 21:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.562 21:24:40 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:05.562 21:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.562 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:21:05.562 Malloc0 00:21:05.562 21:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.562 21:24:40 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.562 21:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.562 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:21:05.562 21:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.562 21:24:40 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.562 21:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.562 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:21:05.562 21:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.562 21:24:40 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:05.562 21:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.562 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:21:05.562 [2024-07-26 21:24:40.353651] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:05.562 21:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.562 21:24:40 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:05.562 21:24:40 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:05.562 21:24:40 -- nvmf/common.sh@520 -- # config=() 00:21:05.562 21:24:40 -- nvmf/common.sh@520 -- # local subsystem config 00:21:05.562 21:24:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:05.562 21:24:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:05.562 { 00:21:05.562 "params": { 00:21:05.562 "name": "Nvme$subsystem", 00:21:05.562 "trtype": "$TEST_TRANSPORT", 00:21:05.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.562 "adrfam": "ipv4", 00:21:05.562 "trsvcid": "$NVMF_PORT", 00:21:05.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.562 "hdgst": ${hdgst:-false}, 00:21:05.562 "ddgst": ${ddgst:-false} 00:21:05.562 }, 00:21:05.562 "method": "bdev_nvme_attach_controller" 00:21:05.562 } 00:21:05.562 EOF 00:21:05.562 )") 00:21:05.562 21:24:40 -- nvmf/common.sh@542 -- # cat 00:21:05.562 21:24:40 -- nvmf/common.sh@544 -- # jq . 00:21:05.562 21:24:40 -- nvmf/common.sh@545 -- # IFS=, 00:21:05.562 21:24:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:05.562 "params": { 00:21:05.562 "name": "Nvme1", 00:21:05.562 "trtype": "rdma", 00:21:05.562 "traddr": "192.168.100.8", 00:21:05.562 "adrfam": "ipv4", 00:21:05.562 "trsvcid": "4420", 00:21:05.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.562 "hdgst": false, 00:21:05.562 "ddgst": false 00:21:05.562 }, 00:21:05.562 "method": "bdev_nvme_attach_controller" 00:21:05.562 }' 00:21:05.562 [2024-07-26 21:24:40.402817] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:21:05.562 [2024-07-26 21:24:40.402873] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727717 ] 00:21:05.819 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.819 [2024-07-26 21:24:40.489801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:05.819 [2024-07-26 21:24:40.528150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.819 [2024-07-26 21:24:40.528246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.819 [2024-07-26 21:24:40.528248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.076 [2024-07-26 21:24:40.699224] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:06.076 [2024-07-26 21:24:40.699256] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:06.076 I/O targets: 00:21:06.076 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:06.076 00:21:06.076 00:21:06.076 CUnit - A unit testing framework for C - Version 2.1-3 00:21:06.076 http://cunit.sourceforge.net/ 00:21:06.076 00:21:06.076 00:21:06.076 Suite: bdevio tests on: Nvme1n1 00:21:06.076 Test: blockdev write read block ...passed 00:21:06.076 Test: blockdev write zeroes read block ...passed 00:21:06.076 Test: blockdev write zeroes read no split ...passed 00:21:06.076 Test: blockdev write zeroes read split ...passed 00:21:06.076 Test: blockdev write zeroes read split partial ...passed 00:21:06.076 Test: blockdev reset ...[2024-07-26 21:24:40.729156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:06.076 [2024-07-26 21:24:40.751614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:06.076 [2024-07-26 21:24:40.778429] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:06.076 passed 00:21:06.076 Test: blockdev write read 8 blocks ...passed 00:21:06.076 Test: blockdev write read size > 128k ...passed 00:21:06.076 Test: blockdev write read invalid size ...passed 00:21:06.076 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:06.076 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:06.076 Test: blockdev write read max offset ...passed 00:21:06.076 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:06.076 Test: blockdev writev readv 8 blocks ...passed 00:21:06.076 Test: blockdev writev readv 30 x 1block ...passed 00:21:06.076 Test: blockdev writev readv block ...passed 00:21:06.076 Test: blockdev writev readv size > 128k ...passed 00:21:06.076 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:06.076 Test: blockdev comparev and writev ...[2024-07-26 21:24:40.781346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.076 [2024-07-26 21:24:40.781378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.076 [2024-07-26 21:24:40.781391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.076 [2024-07-26 21:24:40.781404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.076 [2024-07-26 21:24:40.781574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.076 [2024-07-26 21:24:40.781585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:06.076 [2024-07-26 21:24:40.781596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.076 [2024-07-26 21:24:40.781605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:06.076 [2024-07-26 21:24:40.781777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.076 [2024-07-26 21:24:40.781788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:06.076 [2024-07-26 21:24:40.781799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.076 [2024-07-26 21:24:40.781808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:06.076 [2024-07-26 21:24:40.781961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.076 [2024-07-26 21:24:40.781971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:06.076 [2024-07-26 21:24:40.781981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.076 [2024-07-26 21:24:40.781991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:06.076 passed 00:21:06.076 Test: blockdev nvme passthru rw ...passed 00:21:06.076 Test: blockdev nvme passthru vendor specific ...[2024-07-26 21:24:40.782248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:06.076 [2024-07-26 21:24:40.782259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:06.076 [2024-07-26 21:24:40.782305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:06.077 [2024-07-26 21:24:40.782315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:06.077 [2024-07-26 21:24:40.782360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:06.077 [2024-07-26 21:24:40.782370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:06.077 [2024-07-26 21:24:40.782411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:06.077 [2024-07-26 21:24:40.782421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:06.077 passed 00:21:06.077 Test: blockdev nvme admin passthru ...passed 00:21:06.077 Test: blockdev copy ...passed 00:21:06.077 00:21:06.077 Run Summary: Type Total Ran Passed Failed Inactive 00:21:06.077 suites 1 1 n/a 0 0 00:21:06.077 tests 23 23 23 0 0 00:21:06.077 asserts 152 152 152 0 n/a 00:21:06.077 00:21:06.077 Elapsed time = 0.169 seconds 00:21:06.334 21:24:40 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:06.334 21:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.334 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:21:06.334 21:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.334 21:24:40 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:06.334 21:24:40 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:06.334 21:24:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:06.334 21:24:40 -- nvmf/common.sh@116 -- # sync 00:21:06.334 21:24:40 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:06.334 21:24:40 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:06.334 21:24:40 -- nvmf/common.sh@119 -- # set +e 00:21:06.334 21:24:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:06.334 21:24:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:06.334 rmmod nvme_rdma 00:21:06.334 rmmod nvme_fabrics 00:21:06.334 21:24:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:06.334 21:24:41 -- nvmf/common.sh@123 -- # set -e 00:21:06.334 21:24:41 -- nvmf/common.sh@124 -- # return 0 00:21:06.334 21:24:41 -- nvmf/common.sh@477 -- # '[' -n 1727474 ']' 00:21:06.334 21:24:41 -- nvmf/common.sh@478 -- # killprocess 1727474 00:21:06.334 21:24:41 -- common/autotest_common.sh@926 -- # '[' -z 1727474 ']' 00:21:06.334 21:24:41 -- common/autotest_common.sh@930 -- # kill -0 1727474 00:21:06.334 21:24:41 -- common/autotest_common.sh@931 -- # uname 00:21:06.334 21:24:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:06.334 21:24:41 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1727474 00:21:06.334 21:24:41 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:06.334 21:24:41 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:06.334 21:24:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1727474' 00:21:06.334 killing process with pid 1727474 00:21:06.334 21:24:41 -- common/autotest_common.sh@945 -- # kill 1727474 00:21:06.334 21:24:41 -- common/autotest_common.sh@950 -- # wait 1727474 00:21:06.592 21:24:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:06.592 21:24:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:06.592 00:21:06.592 real 0m10.499s 00:21:06.592 user 0m10.994s 00:21:06.592 sys 0m6.895s 00:21:06.592 21:24:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.592 21:24:41 -- common/autotest_common.sh@10 -- # set +x 00:21:06.592 ************************************ 00:21:06.592 END TEST nvmf_bdevio 00:21:06.592 ************************************ 00:21:06.592 21:24:41 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:21:06.592 21:24:41 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:06.592 21:24:41 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:06.592 21:24:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:06.592 21:24:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:06.592 21:24:41 -- common/autotest_common.sh@10 -- # set +x 00:21:06.592 ************************************ 00:21:06.592 START TEST nvmf_fuzz 00:21:06.592 ************************************ 00:21:06.592 21:24:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:06.850 * Looking for test storage... 
00:21:06.850 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:06.850 21:24:41 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.850 21:24:41 -- nvmf/common.sh@7 -- # uname -s 00:21:06.850 21:24:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.850 21:24:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.850 21:24:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.850 21:24:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.850 21:24:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.850 21:24:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.850 21:24:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.850 21:24:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.850 21:24:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.850 21:24:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.850 21:24:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:06.850 21:24:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:06.850 21:24:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.850 21:24:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.850 21:24:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.850 21:24:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:06.850 21:24:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.850 21:24:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.850 21:24:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.850 21:24:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.850 21:24:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.851 21:24:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.851 21:24:41 -- paths/export.sh@5 -- # export PATH 00:21:06.851 21:24:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.851 21:24:41 -- nvmf/common.sh@46 -- # : 0 00:21:06.851 21:24:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:06.851 21:24:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:06.851 21:24:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:06.851 21:24:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.851 21:24:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.851 21:24:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:06.851 21:24:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:06.851 21:24:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:06.851 21:24:41 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:06.851 21:24:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:06.851 21:24:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.851 21:24:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:06.851 21:24:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:06.851 21:24:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:06.851 21:24:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.851 21:24:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.851 21:24:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.851 21:24:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:06.851 21:24:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:06.851 21:24:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:06.851 21:24:41 -- common/autotest_common.sh@10 -- # set +x 00:21:14.955 21:24:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:14.955 21:24:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:14.955 21:24:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:14.955 21:24:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:14.955 21:24:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:14.955 21:24:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:14.955 21:24:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:14.955 21:24:49 -- nvmf/common.sh@294 -- # net_devs=() 00:21:14.955 21:24:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:14.955 21:24:49 -- nvmf/common.sh@295 -- # e810=() 00:21:14.955 21:24:49 -- nvmf/common.sh@295 -- # local -ga e810 00:21:14.955 21:24:49 -- nvmf/common.sh@296 -- # x722=() 
00:21:14.955 21:24:49 -- nvmf/common.sh@296 -- # local -ga x722 00:21:14.955 21:24:49 -- nvmf/common.sh@297 -- # mlx=() 00:21:14.955 21:24:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:14.955 21:24:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.955 21:24:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:14.955 21:24:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:14.955 21:24:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:14.955 21:24:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:14.955 21:24:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:14.955 21:24:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:14.955 21:24:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:14.955 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:14.955 21:24:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:14.955 21:24:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:14.955 21:24:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:14.955 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:14.955 21:24:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:14.955 21:24:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:14.955 21:24:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:14.955 21:24:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:14.955 21:24:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.955 21:24:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:14.955 21:24:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.955 21:24:49 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:14.955 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:14.955 21:24:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.955 21:24:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:14.955 21:24:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.955 21:24:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:14.955 21:24:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.955 21:24:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:14.956 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:14.956 21:24:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.956 21:24:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:14.956 21:24:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:14.956 21:24:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:14.956 21:24:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:14.956 21:24:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:14.956 21:24:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:14.956 21:24:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:14.956 21:24:49 -- nvmf/common.sh@57 -- # uname 00:21:14.956 21:24:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:14.956 21:24:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:14.956 21:24:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:14.956 21:24:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:14.956 21:24:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:14.956 21:24:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:14.956 21:24:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:14.956 21:24:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:14.956 21:24:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:14.956 21:24:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:14.956 21:24:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:14.956 21:24:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:14.956 21:24:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:14.956 21:24:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:14.956 21:24:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:15.213 21:24:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:15.213 21:24:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:15.213 21:24:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.213 21:24:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:15.213 21:24:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:15.213 21:24:49 -- nvmf/common.sh@104 -- # continue 2 00:21:15.213 21:24:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:15.214 21:24:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.214 21:24:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:15.214 21:24:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.214 21:24:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:15.214 21:24:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:15.214 21:24:49 -- nvmf/common.sh@104 -- # continue 2 00:21:15.214 21:24:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:15.214 21:24:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:15.214 21:24:49 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:15.214 21:24:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:15.214 21:24:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:15.214 21:24:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:15.214 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:15.214 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:15.214 altname enp217s0f0np0 00:21:15.214 altname ens818f0np0 00:21:15.214 inet 192.168.100.8/24 scope global mlx_0_0 00:21:15.214 valid_lft forever preferred_lft forever 00:21:15.214 21:24:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:15.214 21:24:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:15.214 21:24:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:15.214 21:24:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:15.214 21:24:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:15.214 21:24:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:15.214 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:15.214 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:15.214 altname enp217s0f1np1 00:21:15.214 altname ens818f1np1 00:21:15.214 inet 192.168.100.9/24 scope global mlx_0_1 00:21:15.214 valid_lft forever preferred_lft forever 00:21:15.214 21:24:49 -- nvmf/common.sh@410 -- # return 0 00:21:15.214 21:24:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:15.214 21:24:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:15.214 21:24:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:15.214 21:24:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:15.214 21:24:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:15.214 21:24:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:15.214 21:24:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:15.214 21:24:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:15.214 21:24:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:15.214 21:24:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:15.214 21:24:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:15.214 21:24:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.214 21:24:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:15.214 21:24:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:15.214 21:24:49 -- nvmf/common.sh@104 -- # continue 2 00:21:15.214 21:24:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:15.214 21:24:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.214 21:24:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:15.214 21:24:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.214 21:24:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:15.214 21:24:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:15.214 21:24:49 -- nvmf/common.sh@104 -- # continue 2 00:21:15.214 21:24:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:15.214 21:24:49 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:15.214 21:24:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:15.214 21:24:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:15.214 21:24:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:15.214 21:24:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:15.214 21:24:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:15.214 21:24:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:15.214 192.168.100.9' 00:21:15.214 21:24:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:15.214 192.168.100.9' 00:21:15.214 21:24:49 -- nvmf/common.sh@445 -- # head -n 1 00:21:15.214 21:24:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:15.214 21:24:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:15.214 192.168.100.9' 00:21:15.214 21:24:49 -- nvmf/common.sh@446 -- # tail -n +2 00:21:15.214 21:24:49 -- nvmf/common.sh@446 -- # head -n 1 00:21:15.214 21:24:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:15.214 21:24:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:15.214 21:24:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:15.214 21:24:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:15.214 21:24:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:15.214 21:24:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:15.214 21:24:49 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1731948 00:21:15.214 21:24:49 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:15.214 21:24:49 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:15.214 21:24:49 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1731948 00:21:15.214 21:24:49 -- common/autotest_common.sh@819 -- # '[' -z 1731948 ']' 00:21:15.214 21:24:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.214 21:24:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:15.214 21:24:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
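The trace above reduces each RDMA interface to its IPv4 address and then splits the resulting list into the first and second target addresses with head and tail. A minimal standalone sketch of that flow, assuming the two interface names found in this run (the helper approximates the get_ip_address function seen in nvmf/common.sh rather than copying it verbatim):

# Approximation of the get_ip_address helper exercised in the trace.
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is "ADDR/PREFIX", so strip the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9 in this run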
00:21:15.214 21:24:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:15.214 21:24:49 -- common/autotest_common.sh@10 -- # set +x 00:21:16.147 21:24:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:16.147 21:24:50 -- common/autotest_common.sh@852 -- # return 0 00:21:16.147 21:24:50 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:16.147 21:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.147 21:24:50 -- common/autotest_common.sh@10 -- # set +x 00:21:16.147 21:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.147 21:24:50 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:16.147 21:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.147 21:24:50 -- common/autotest_common.sh@10 -- # set +x 00:21:16.147 Malloc0 00:21:16.147 21:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.147 21:24:50 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.147 21:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.147 21:24:50 -- common/autotest_common.sh@10 -- # set +x 00:21:16.147 21:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.147 21:24:50 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.147 21:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.147 21:24:50 -- common/autotest_common.sh@10 -- # set +x 00:21:16.147 21:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.147 21:24:50 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:16.147 21:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.147 21:24:50 -- common/autotest_common.sh@10 -- # set +x 00:21:16.147 21:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.147 21:24:50 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:21:16.147 21:24:50 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:21:48.201 Fuzzing completed. Shutting down the fuzz application 00:21:48.201 00:21:48.201 Dumping successful admin opcodes: 00:21:48.201 8, 9, 10, 24, 00:21:48.201 Dumping successful io opcodes: 00:21:48.201 0, 9, 00:21:48.201 NS: 0x200003af1f00 I/O qp, Total commands completed: 984023, total successful commands: 5767, random_seed: 2613682688 00:21:48.201 NS: 0x200003af1f00 admin qp, Total commands completed: 124336, total successful commands: 1021, random_seed: 329323008 00:21:48.201 21:25:21 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:48.201 Fuzzing completed. 
Shutting down the fuzz application 00:21:48.201 00:21:48.201 Dumping successful admin opcodes: 00:21:48.201 24, 00:21:48.201 Dumping successful io opcodes: 00:21:48.201 00:21:48.201 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 816306190 00:21:48.201 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 816380692 00:21:48.201 21:25:22 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.201 21:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.201 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:21:48.201 21:25:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.201 21:25:23 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:48.201 21:25:23 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:48.201 21:25:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:48.201 21:25:23 -- nvmf/common.sh@116 -- # sync 00:21:48.201 21:25:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:48.201 21:25:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:48.201 21:25:23 -- nvmf/common.sh@119 -- # set +e 00:21:48.201 21:25:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:48.201 21:25:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:48.201 rmmod nvme_rdma 00:21:48.201 rmmod nvme_fabrics 00:21:48.201 21:25:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:48.201 21:25:23 -- nvmf/common.sh@123 -- # set -e 00:21:48.201 21:25:23 -- nvmf/common.sh@124 -- # return 0 00:21:48.201 21:25:23 -- nvmf/common.sh@477 -- # '[' -n 1731948 ']' 00:21:48.201 21:25:23 -- nvmf/common.sh@478 -- # killprocess 1731948 00:21:48.201 21:25:23 -- common/autotest_common.sh@926 -- # '[' -z 1731948 ']' 00:21:48.201 21:25:23 -- common/autotest_common.sh@930 -- # kill -0 1731948 00:21:48.201 21:25:23 -- common/autotest_common.sh@931 -- # uname 00:21:48.201 21:25:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:48.201 21:25:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1731948 00:21:48.459 21:25:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:48.459 21:25:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:48.459 21:25:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1731948' 00:21:48.459 killing process with pid 1731948 00:21:48.459 21:25:23 -- common/autotest_common.sh@945 -- # kill 1731948 00:21:48.459 21:25:23 -- common/autotest_common.sh@950 -- # wait 1731948 00:21:48.715 21:25:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:48.715 21:25:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:48.715 21:25:23 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:48.715 00:21:48.715 real 0m41.992s 00:21:48.715 user 0m50.967s 00:21:48.715 sys 0m22.597s 00:21:48.715 21:25:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.715 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:21:48.715 ************************************ 00:21:48.715 END TEST nvmf_fuzz 00:21:48.715 ************************************ 00:21:48.715 21:25:23 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:48.715 21:25:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 
1 ']' 00:21:48.715 21:25:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:48.715 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:21:48.715 ************************************ 00:21:48.715 START TEST nvmf_multiconnection 00:21:48.715 ************************************ 00:21:48.715 21:25:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:48.715 * Looking for test storage... 00:21:48.715 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:48.715 21:25:23 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.715 21:25:23 -- nvmf/common.sh@7 -- # uname -s 00:21:48.715 21:25:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.715 21:25:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.715 21:25:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.715 21:25:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.715 21:25:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.715 21:25:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.715 21:25:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.715 21:25:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.715 21:25:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.716 21:25:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.716 21:25:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:48.716 21:25:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:48.716 21:25:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.716 21:25:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.716 21:25:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.716 21:25:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:48.716 21:25:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.716 21:25:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.716 21:25:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.716 21:25:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.716 21:25:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.716 21:25:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.716 21:25:23 -- paths/export.sh@5 -- # export PATH 00:21:48.716 21:25:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.716 21:25:23 -- nvmf/common.sh@46 -- # : 0 00:21:48.716 21:25:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:48.716 21:25:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:48.716 21:25:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:48.716 21:25:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.716 21:25:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.716 21:25:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:48.716 21:25:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:48.716 21:25:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:48.716 21:25:23 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:48.716 21:25:23 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:48.716 21:25:23 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:48.716 21:25:23 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:48.716 21:25:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:48.716 21:25:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.716 21:25:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:48.716 21:25:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:48.716 21:25:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:48.716 21:25:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.716 21:25:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.716 21:25:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.716 21:25:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:48.716 21:25:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:48.716 21:25:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:48.716 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:21:56.883 21:25:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:56.884 21:25:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:56.884 21:25:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:56.884 21:25:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:56.884 21:25:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:56.884 21:25:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:56.884 21:25:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:56.884 21:25:31 -- nvmf/common.sh@294 -- # net_devs=() 
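nvmf/common.sh above generates a host NQN with nvme gen-hostnqn and keeps the embedded UUID as the host ID; the same pair is handed to every nvme connect later in this trace. A short reconstruction of that pairing (the UUID extraction is an assumption about how the host ID is derived, not a copy of the script):

# Hypothetical reconstruction of the host identity reused by all later connects.
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # assumption: the host ID is the UUID portion of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420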
00:21:56.884 21:25:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:56.884 21:25:31 -- nvmf/common.sh@295 -- # e810=() 00:21:56.884 21:25:31 -- nvmf/common.sh@295 -- # local -ga e810 00:21:56.884 21:25:31 -- nvmf/common.sh@296 -- # x722=() 00:21:56.884 21:25:31 -- nvmf/common.sh@296 -- # local -ga x722 00:21:56.884 21:25:31 -- nvmf/common.sh@297 -- # mlx=() 00:21:56.884 21:25:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:56.884 21:25:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.884 21:25:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:56.884 21:25:31 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:56.884 21:25:31 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:56.884 21:25:31 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:56.884 21:25:31 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:56.884 21:25:31 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:56.884 21:25:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:56.884 21:25:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:56.884 21:25:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:56.884 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:56.884 21:25:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:56.884 21:25:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:56.884 21:25:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:56.884 21:25:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:56.884 21:25:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:56.884 21:25:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:56.884 21:25:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:56.884 21:25:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:56.884 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:56.884 21:25:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:57.142 21:25:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:57.142 21:25:31 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.142 21:25:31 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.142 21:25:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:57.142 21:25:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.142 21:25:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:57.142 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:57.142 21:25:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.142 21:25:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.142 21:25:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.142 21:25:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:57.142 21:25:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.142 21:25:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:57.142 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:57.142 21:25:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.142 21:25:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:57.142 21:25:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:57.142 21:25:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:57.142 21:25:31 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:57.142 21:25:31 -- nvmf/common.sh@57 -- # uname 00:21:57.142 21:25:31 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:57.142 21:25:31 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:57.142 21:25:31 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:57.142 21:25:31 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:57.142 21:25:31 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:57.142 21:25:31 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:57.142 21:25:31 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:57.142 21:25:31 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:57.142 21:25:31 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:57.142 21:25:31 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:57.142 21:25:31 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:57.142 21:25:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:57.142 21:25:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:57.142 21:25:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:57.142 21:25:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:57.142 21:25:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:57.142 21:25:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:57.142 21:25:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:57.142 21:25:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:57.142 21:25:31 -- nvmf/common.sh@104 -- # continue 2 00:21:57.142 21:25:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:57.142 21:25:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:57.142 21:25:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:57.142 21:25:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:57.142 21:25:31 -- 
nvmf/common.sh@104 -- # continue 2 00:21:57.142 21:25:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:57.142 21:25:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:57.142 21:25:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:57.142 21:25:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:57.142 21:25:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:57.142 21:25:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:57.142 21:25:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:57.142 21:25:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:57.142 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:57.142 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:57.142 altname enp217s0f0np0 00:21:57.142 altname ens818f0np0 00:21:57.142 inet 192.168.100.8/24 scope global mlx_0_0 00:21:57.142 valid_lft forever preferred_lft forever 00:21:57.142 21:25:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:57.142 21:25:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:57.142 21:25:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:57.142 21:25:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:57.142 21:25:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:57.142 21:25:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:57.142 21:25:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:57.142 21:25:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:57.142 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:57.142 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:57.142 altname enp217s0f1np1 00:21:57.142 altname ens818f1np1 00:21:57.142 inet 192.168.100.9/24 scope global mlx_0_1 00:21:57.142 valid_lft forever preferred_lft forever 00:21:57.142 21:25:31 -- nvmf/common.sh@410 -- # return 0 00:21:57.142 21:25:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:57.142 21:25:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:57.142 21:25:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:57.142 21:25:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:57.142 21:25:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:57.142 21:25:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:57.142 21:25:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:57.142 21:25:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:57.142 21:25:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:57.143 21:25:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:57.143 21:25:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:57.143 21:25:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:57.143 21:25:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:57.143 21:25:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:57.143 21:25:31 -- nvmf/common.sh@104 -- # continue 2 00:21:57.143 21:25:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:57.143 21:25:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:57.143 21:25:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:57.143 21:25:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:57.143 21:25:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
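The discovery pass above maps each Mellanox PCI function to its Linux netdev by listing /sys/bus/pci/devices/<bdf>/net/. Taken in isolation, that lookup amounts to the following (the BDF is the first port from this run):

pci=0000:d9:00.0
# Every entry under .../net/ is a netdev owned by this PCI function (here: mlx_0_0).
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path, keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"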
00:21:57.143 21:25:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:57.143 21:25:31 -- nvmf/common.sh@104 -- # continue 2 00:21:57.143 21:25:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:57.143 21:25:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:57.143 21:25:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:57.143 21:25:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:57.143 21:25:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:57.143 21:25:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:57.143 21:25:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:57.143 21:25:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:57.143 21:25:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:57.143 21:25:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:57.143 21:25:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:57.143 21:25:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:57.143 21:25:31 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:57.143 192.168.100.9' 00:21:57.143 21:25:31 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:57.143 192.168.100.9' 00:21:57.143 21:25:31 -- nvmf/common.sh@445 -- # head -n 1 00:21:57.143 21:25:31 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:57.143 21:25:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:57.143 192.168.100.9' 00:21:57.143 21:25:31 -- nvmf/common.sh@446 -- # tail -n +2 00:21:57.143 21:25:31 -- nvmf/common.sh@446 -- # head -n 1 00:21:57.143 21:25:31 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:57.143 21:25:31 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:57.143 21:25:31 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:57.143 21:25:31 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:57.143 21:25:31 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:57.143 21:25:31 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:57.143 21:25:32 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:57.143 21:25:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:57.143 21:25:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:57.143 21:25:32 -- common/autotest_common.sh@10 -- # set +x 00:21:57.143 21:25:32 -- nvmf/common.sh@469 -- # nvmfpid=1741530 00:21:57.143 21:25:32 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:57.143 21:25:32 -- nvmf/common.sh@470 -- # waitforlisten 1741530 00:21:57.143 21:25:32 -- common/autotest_common.sh@819 -- # '[' -z 1741530 ']' 00:21:57.401 21:25:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.401 21:25:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:57.401 21:25:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.401 21:25:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:57.401 21:25:32 -- common/autotest_common.sh@10 -- # set +x 00:21:57.401 [2024-07-26 21:25:32.056288] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
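nvmfappstart above launches the target with event mask 0xFFFF and a four-core mask, records its pid, and blocks until the RPC socket answers. A simplified approximation of that start-and-wait sequence (the polling loop stands in for waitforlisten and is not the script's exact implementation):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default RPC socket until the target accepts commands (stand-in for waitforlisten).
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done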
00:21:57.401 [2024-07-26 21:25:32.056348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.401 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.401 [2024-07-26 21:25:32.144014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.401 [2024-07-26 21:25:32.184809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.401 [2024-07-26 21:25:32.184914] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.401 [2024-07-26 21:25:32.184925] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.401 [2024-07-26 21:25:32.184934] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.401 [2024-07-26 21:25:32.184983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.401 [2024-07-26 21:25:32.185077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.401 [2024-07-26 21:25:32.185138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.401 [2024-07-26 21:25:32.185139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.329 21:25:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:58.329 21:25:32 -- common/autotest_common.sh@852 -- # return 0 00:21:58.329 21:25:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:58.329 21:25:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:58.329 21:25:32 -- common/autotest_common.sh@10 -- # set +x 00:21:58.329 21:25:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.329 21:25:32 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:58.329 21:25:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.329 21:25:32 -- common/autotest_common.sh@10 -- # set +x 00:21:58.329 [2024-07-26 21:25:32.939791] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1db3060/0x1db7550) succeed. 00:21:58.329 [2024-07-26 21:25:32.950016] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1db4650/0x1df8be0) succeed. 
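Creating the RDMA transport is the first RPC the multiconnection test issues once the target is up, and the two create_ib_device notices above confirm that both mlx5 ports were claimed by it. Issued directly instead of through the rpc_cmd wrapper, the call would look roughly like this (flags copied from the trace; invoking rpc.py by hand is an assumption about the wrapper's plumbing):

# Same parameters as the rpc_cmd call in the trace.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192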
00:21:58.329 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.329 21:25:33 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:58.329 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.329 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:58.329 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.329 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.329 Malloc1 00:21:58.329 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.329 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:58.329 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.329 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.329 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.329 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:58.329 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.329 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.329 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.329 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:58.329 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.329 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.329 [2024-07-26 21:25:33.120655] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:58.330 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.330 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.330 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:58.330 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.330 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.330 Malloc2 00:21:58.330 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.330 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:58.330 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.330 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.330 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.330 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:58.330 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.330 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.330 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.330 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:58.330 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.330 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.330 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.330 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.330 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:58.330 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.330 21:25:33 -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.330 Malloc3 00:21:58.330 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.330 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:58.330 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.330 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.330 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.330 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:58.330 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.330 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.586 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 Malloc4 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.586 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 Malloc5 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 
Malloc5 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.586 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 Malloc6 00:21:58.586 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.586 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:58.586 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.587 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 Malloc7 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
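The pattern repeating above continues through Malloc11 and cnode11 (NVMF_SUBSYS=11): one 64 MB malloc bdev with 512-byte blocks, one subsystem with serial SPDK$i, one namespace, and one RDMA listener per index, after which each subsystem is connected from the host side with nvme connect -i 15, as the trace shows further down. Condensed into a single loop (rpc.py shown in place of the test's rpc_cmd wrapper), the provisioning step is approximately:

for i in $(seq 1 11); do
    # One backing bdev, one subsystem, one namespace, and one RDMA listener per index.
    rpc.py bdev_malloc_create 64 512 -b Malloc$i
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done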
00:21:58.587 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.587 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 Malloc8 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.587 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 Malloc9 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.587 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:58.587 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.587 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:21:58.843 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.843 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.843 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:58.843 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.843 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 Malloc10 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:58.843 21:25:33 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.843 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:58.843 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.843 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:21:58.843 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.843 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.843 21:25:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:58.843 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.843 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 Malloc11 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:58.843 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.843 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:58.843 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.843 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:21:58.843 21:25:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.843 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:58.843 21:25:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.843 21:25:33 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:58.843 21:25:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.843 21:25:33 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:59.773 21:25:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:59.773 21:25:34 -- common/autotest_common.sh@1177 -- # local i=0 00:21:59.773 21:25:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:59.773 21:25:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:59.773 21:25:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:02.298 21:25:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:02.298 21:25:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:02.298 21:25:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:02.298 21:25:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:02.298 21:25:36 -- 
common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:02.298 21:25:36 -- common/autotest_common.sh@1187 -- # return 0 00:22:02.298 21:25:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.299 21:25:36 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:22:02.864 21:25:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:02.864 21:25:37 -- common/autotest_common.sh@1177 -- # local i=0 00:22:02.864 21:25:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:02.864 21:25:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:02.864 21:25:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:04.761 21:25:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:04.761 21:25:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:04.761 21:25:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:04.761 21:25:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:04.761 21:25:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:04.761 21:25:39 -- common/autotest_common.sh@1187 -- # return 0 00:22:04.761 21:25:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.761 21:25:39 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:22:05.695 21:25:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:05.695 21:25:40 -- common/autotest_common.sh@1177 -- # local i=0 00:22:05.695 21:25:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:05.695 21:25:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:05.695 21:25:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:08.221 21:25:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:08.221 21:25:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:08.221 21:25:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:08.221 21:25:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:08.221 21:25:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:08.221 21:25:42 -- common/autotest_common.sh@1187 -- # return 0 00:22:08.221 21:25:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:08.221 21:25:42 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:22:08.786 21:25:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:08.786 21:25:43 -- common/autotest_common.sh@1177 -- # local i=0 00:22:08.786 21:25:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:08.786 21:25:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:08.786 21:25:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:11.308 21:25:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:11.308 21:25:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:11.308 
21:25:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:11.308 21:25:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:11.308 21:25:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:11.308 21:25:45 -- common/autotest_common.sh@1187 -- # return 0 00:22:11.308 21:25:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.308 21:25:45 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:22:11.872 21:25:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:11.872 21:25:46 -- common/autotest_common.sh@1177 -- # local i=0 00:22:11.872 21:25:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:11.872 21:25:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:11.872 21:25:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:13.767 21:25:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:13.767 21:25:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:13.767 21:25:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:13.767 21:25:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:13.767 21:25:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:13.767 21:25:48 -- common/autotest_common.sh@1187 -- # return 0 00:22:13.767 21:25:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:13.767 21:25:48 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:22:14.699 21:25:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:14.699 21:25:49 -- common/autotest_common.sh@1177 -- # local i=0 00:22:14.699 21:25:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:14.699 21:25:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:14.699 21:25:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:17.268 21:25:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:17.268 21:25:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:17.268 21:25:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:17.268 21:25:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:17.268 21:25:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.268 21:25:51 -- common/autotest_common.sh@1187 -- # return 0 00:22:17.268 21:25:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.268 21:25:51 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:22:17.833 21:25:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:17.833 21:25:52 -- common/autotest_common.sh@1177 -- # local i=0 00:22:17.833 21:25:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:17.833 21:25:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:17.833 21:25:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:19.730 
21:25:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:19.730 21:25:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:19.730 21:25:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:19.730 21:25:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:19.730 21:25:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:19.730 21:25:54 -- common/autotest_common.sh@1187 -- # return 0 00:22:19.730 21:25:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:19.730 21:25:54 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:22:21.101 21:25:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:21.102 21:25:55 -- common/autotest_common.sh@1177 -- # local i=0 00:22:21.102 21:25:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:21.102 21:25:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:21.102 21:25:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:22.999 21:25:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:22.999 21:25:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:22.999 21:25:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:22.999 21:25:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:22.999 21:25:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:22.999 21:25:57 -- common/autotest_common.sh@1187 -- # return 0 00:22:22.999 21:25:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:22.999 21:25:57 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:22:23.932 21:25:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:23.932 21:25:58 -- common/autotest_common.sh@1177 -- # local i=0 00:22:23.932 21:25:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:23.932 21:25:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:23.932 21:25:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:25.830 21:26:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:25.830 21:26:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:25.830 21:26:00 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:25.830 21:26:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:25.830 21:26:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:25.830 21:26:00 -- common/autotest_common.sh@1187 -- # return 0 00:22:25.830 21:26:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:25.830 21:26:00 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:22:26.762 21:26:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:26.762 21:26:01 -- common/autotest_common.sh@1177 -- # local i=0 00:22:26.762 21:26:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:22:26.762 21:26:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:26.762 21:26:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:29.287 21:26:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:29.287 21:26:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:29.287 21:26:03 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:29.287 21:26:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:29.287 21:26:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:29.287 21:26:03 -- common/autotest_common.sh@1187 -- # return 0 00:22:29.287 21:26:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.287 21:26:03 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:22:29.852 21:26:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:29.852 21:26:04 -- common/autotest_common.sh@1177 -- # local i=0 00:22:29.852 21:26:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:29.852 21:26:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:29.852 21:26:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:31.745 21:26:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:31.745 21:26:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:31.745 21:26:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:31.745 21:26:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:31.745 21:26:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:31.745 21:26:06 -- common/autotest_common.sh@1187 -- # return 0 00:22:31.745 21:26:06 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:32.002 [global] 00:22:32.002 thread=1 00:22:32.002 invalidate=1 00:22:32.002 rw=read 00:22:32.002 time_based=1 00:22:32.002 runtime=10 00:22:32.002 ioengine=libaio 00:22:32.002 direct=1 00:22:32.002 bs=262144 00:22:32.002 iodepth=64 00:22:32.002 norandommap=1 00:22:32.002 numjobs=1 00:22:32.002 00:22:32.002 [job0] 00:22:32.002 filename=/dev/nvme0n1 00:22:32.002 [job1] 00:22:32.002 filename=/dev/nvme10n1 00:22:32.002 [job2] 00:22:32.002 filename=/dev/nvme1n1 00:22:32.002 [job3] 00:22:32.002 filename=/dev/nvme2n1 00:22:32.002 [job4] 00:22:32.002 filename=/dev/nvme3n1 00:22:32.002 [job5] 00:22:32.002 filename=/dev/nvme4n1 00:22:32.002 [job6] 00:22:32.002 filename=/dev/nvme5n1 00:22:32.002 [job7] 00:22:32.002 filename=/dev/nvme6n1 00:22:32.002 [job8] 00:22:32.002 filename=/dev/nvme7n1 00:22:32.002 [job9] 00:22:32.002 filename=/dev/nvme8n1 00:22:32.002 [job10] 00:22:32.002 filename=/dev/nvme9n1 00:22:32.002 Could not set queue depth (nvme0n1) 00:22:32.002 Could not set queue depth (nvme10n1) 00:22:32.002 Could not set queue depth (nvme1n1) 00:22:32.002 Could not set queue depth (nvme2n1) 00:22:32.002 Could not set queue depth (nvme3n1) 00:22:32.002 Could not set queue depth (nvme4n1) 00:22:32.002 Could not set queue depth (nvme5n1) 00:22:32.002 Could not set queue depth (nvme6n1) 00:22:32.002 Could not set queue depth (nvme7n1) 00:22:32.002 Could not set queue depth (nvme8n1) 00:22:32.002 Could not set queue depth (nvme9n1) 00:22:32.569 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, 
(W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.569 fio-3.35 00:22:32.569 Starting 11 threads 00:22:44.797 00:22:44.797 job0: (groupid=0, jobs=1): err= 0: pid=1747863: Fri Jul 26 21:26:17 2024 00:22:44.797 read: IOPS=1459, BW=365MiB/s (383MB/s)(3665MiB/10044msec) 00:22:44.797 slat (usec): min=12, max=14291, avg=678.36, stdev=1643.34 00:22:44.797 clat (usec): min=9465, max=91441, avg=43130.92, stdev=8349.70 00:22:44.797 lat (usec): min=9725, max=91466, avg=43809.27, stdev=8571.77 00:22:44.797 clat percentiles (usec): 00:22:44.797 | 1.00th=[28705], 5.00th=[29754], 10.00th=[30802], 20.00th=[32113], 00:22:44.797 | 30.00th=[43779], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:22:44.797 | 70.00th=[47449], 80.00th=[47973], 90.00th=[49546], 95.00th=[52691], 00:22:44.797 | 99.00th=[64226], 99.50th=[65799], 99.90th=[82314], 99.95th=[85459], 00:22:44.797 | 99.99th=[87557] 00:22:44.797 bw ( KiB/s): min=278016, max=516096, per=9.13%, avg=373675.95, stdev=67094.06, samples=20 00:22:44.797 iops : min= 1086, max= 2016, avg=1459.65, stdev=262.06, samples=20 00:22:44.797 lat (msec) : 10=0.02%, 20=0.38%, 50=90.76%, 100=8.83% 00:22:44.797 cpu : usr=0.55%, sys=6.42%, ctx=2805, majf=0, minf=4097 00:22:44.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:44.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.797 issued rwts: total=14658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.797 job1: (groupid=0, jobs=1): err= 0: pid=1747866: Fri Jul 26 21:26:17 2024 00:22:44.797 read: IOPS=1397, BW=349MiB/s (366MB/s)(3509MiB/10043msec) 00:22:44.797 slat (usec): min=13, max=14391, avg=708.40, stdev=1659.37 00:22:44.797 clat (usec): min=10289, max=86579, avg=45040.88, stdev=6407.11 00:22:44.797 lat (usec): min=10552, max=86617, avg=45749.28, stdev=6624.49 00:22:44.797 clat percentiles (usec): 00:22:44.797 | 1.00th=[28967], 5.00th=[30802], 10.00th=[32375], 20.00th=[44827], 00:22:44.797 | 30.00th=[45351], 40.00th=[45876], 50.00th=[45876], 60.00th=[46924], 00:22:44.797 | 70.00th=[47449], 80.00th=[47973], 90.00th=[49546], 95.00th=[51643], 00:22:44.797 | 
99.00th=[63701], 99.50th=[65274], 99.90th=[76022], 99.95th=[82314], 00:22:44.797 | 99.99th=[86508] 00:22:44.797 bw ( KiB/s): min=318976, max=470016, per=8.74%, avg=357708.80, stdev=39587.72, samples=20 00:22:44.797 iops : min= 1246, max= 1836, avg=1397.30, stdev=154.64, samples=20 00:22:44.797 lat (msec) : 20=0.27%, 50=91.26%, 100=8.47% 00:22:44.797 cpu : usr=0.60%, sys=6.38%, ctx=2699, majf=0, minf=4097 00:22:44.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:44.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.798 issued rwts: total=14036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.798 job2: (groupid=0, jobs=1): err= 0: pid=1747867: Fri Jul 26 21:26:17 2024 00:22:44.798 read: IOPS=1276, BW=319MiB/s (335MB/s)(3205MiB/10044msec) 00:22:44.798 slat (usec): min=12, max=24725, avg=755.12, stdev=2003.97 00:22:44.798 clat (usec): min=11682, max=92389, avg=49327.46, stdev=7341.77 00:22:44.798 lat (usec): min=12120, max=92428, avg=50082.58, stdev=7636.02 00:22:44.798 clat percentiles (usec): 00:22:44.798 | 1.00th=[30278], 5.00th=[41157], 10.00th=[45351], 20.00th=[45876], 00:22:44.798 | 30.00th=[46924], 40.00th=[46924], 50.00th=[47449], 60.00th=[48497], 00:22:44.798 | 70.00th=[49021], 80.00th=[51119], 90.00th=[62129], 95.00th=[64226], 00:22:44.798 | 99.00th=[68682], 99.50th=[70779], 99.90th=[85459], 99.95th=[86508], 00:22:44.798 | 99.99th=[90702] 00:22:44.798 bw ( KiB/s): min=272384, max=363520, per=7.98%, avg=326634.40, stdev=24717.14, samples=20 00:22:44.798 iops : min= 1064, max= 1420, avg=1275.90, stdev=96.57, samples=20 00:22:44.798 lat (msec) : 20=0.28%, 50=75.93%, 100=23.79% 00:22:44.798 cpu : usr=0.47%, sys=5.64%, ctx=2763, majf=0, minf=4097 00:22:44.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.798 issued rwts: total=12821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.798 job3: (groupid=0, jobs=1): err= 0: pid=1747868: Fri Jul 26 21:26:17 2024 00:22:44.798 read: IOPS=1297, BW=324MiB/s (340MB/s)(3259MiB/10044msec) 00:22:44.798 slat (usec): min=11, max=20174, avg=747.34, stdev=1770.30 00:22:44.798 clat (usec): min=11372, max=89483, avg=48511.64, stdev=6313.77 00:22:44.798 lat (usec): min=11671, max=91467, avg=49258.98, stdev=6560.05 00:22:44.798 clat percentiles (usec): 00:22:44.798 | 1.00th=[35914], 5.00th=[44303], 10.00th=[44827], 20.00th=[45351], 00:22:44.798 | 30.00th=[45876], 40.00th=[46400], 50.00th=[46924], 60.00th=[47449], 00:22:44.798 | 70.00th=[47973], 80.00th=[49546], 90.00th=[60556], 95.00th=[63701], 00:22:44.798 | 99.00th=[67634], 99.50th=[69731], 99.90th=[81265], 99.95th=[83362], 00:22:44.798 | 99.99th=[89654] 00:22:44.798 bw ( KiB/s): min=281088, max=350720, per=8.11%, avg=332108.80, stdev=22343.68, samples=20 00:22:44.798 iops : min= 1098, max= 1370, avg=1297.30, stdev=87.28, samples=20 00:22:44.798 lat (msec) : 20=0.28%, 50=80.87%, 100=18.85% 00:22:44.798 cpu : usr=0.42%, sys=5.80%, ctx=2732, majf=0, minf=4097 00:22:44.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.798 issued rwts: total=13036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.798 job4: (groupid=0, jobs=1): err= 0: pid=1747870: Fri Jul 26 21:26:17 2024 00:22:44.798 read: IOPS=1382, BW=346MiB/s (362MB/s)(3471MiB/10044msec) 00:22:44.798 slat (usec): min=13, max=24153, avg=710.09, stdev=1779.40 00:22:44.798 clat (usec): min=11774, max=89713, avg=45535.00, stdev=9268.35 00:22:44.798 lat (usec): min=12019, max=89757, avg=46245.09, stdev=9509.16 00:22:44.798 clat percentiles (usec): 00:22:44.798 | 1.00th=[17957], 5.00th=[30278], 10.00th=[31851], 20.00th=[40633], 00:22:44.798 | 30.00th=[45351], 40.00th=[46400], 50.00th=[46924], 60.00th=[47449], 00:22:44.798 | 70.00th=[47973], 80.00th=[49546], 90.00th=[53216], 95.00th=[63177], 00:22:44.798 | 99.00th=[66847], 99.50th=[69731], 99.90th=[83362], 99.95th=[87557], 00:22:44.798 | 99.99th=[89654] 00:22:44.798 bw ( KiB/s): min=259072, max=568320, per=8.64%, avg=353813.70, stdev=68234.55, samples=20 00:22:44.798 iops : min= 1012, max= 2220, avg=1382.05, stdev=266.57, samples=20 00:22:44.798 lat (msec) : 20=1.37%, 50=81.92%, 100=16.72% 00:22:44.798 cpu : usr=0.49%, sys=6.17%, ctx=2737, majf=0, minf=4097 00:22:44.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.798 issued rwts: total=13885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.798 job5: (groupid=0, jobs=1): err= 0: pid=1747871: Fri Jul 26 21:26:17 2024 00:22:44.798 read: IOPS=1396, BW=349MiB/s (366MB/s)(3506MiB/10043msec) 00:22:44.798 slat (usec): min=13, max=13870, avg=708.97, stdev=1654.39 00:22:44.798 clat (usec): min=10340, max=87225, avg=45081.82, stdev=6517.13 00:22:44.798 lat (usec): min=10582, max=87282, avg=45790.79, stdev=6728.81 00:22:44.798 clat percentiles (usec): 00:22:44.798 | 1.00th=[28967], 5.00th=[30540], 10.00th=[32113], 20.00th=[44827], 00:22:44.798 | 30.00th=[45351], 40.00th=[45876], 50.00th=[46400], 60.00th=[46924], 00:22:44.798 | 70.00th=[47449], 80.00th=[47973], 90.00th=[49546], 95.00th=[51643], 00:22:44.798 | 99.00th=[63701], 99.50th=[65799], 99.90th=[83362], 99.95th=[85459], 00:22:44.798 | 99.99th=[87557] 00:22:44.798 bw ( KiB/s): min=317440, max=466944, per=8.73%, avg=357384.45, stdev=39933.98, samples=20 00:22:44.798 iops : min= 1240, max= 1824, avg=1396.00, stdev=156.01, samples=20 00:22:44.798 lat (msec) : 20=0.27%, 50=91.46%, 100=8.27% 00:22:44.798 cpu : usr=0.64%, sys=6.51%, ctx=2708, majf=0, minf=4097 00:22:44.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.798 issued rwts: total=14022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.798 job6: (groupid=0, jobs=1): err= 0: pid=1747873: Fri Jul 26 21:26:17 2024 00:22:44.798 read: IOPS=1314, BW=329MiB/s (345MB/s)(3300MiB/10044msec) 00:22:44.798 slat (usec): min=12, max=16161, avg=738.79, stdev=1822.82 00:22:44.798 clat (usec): min=9420, max=90914, avg=47903.64, stdev=7685.26 00:22:44.798 lat (usec): 
min=9681, max=90956, avg=48642.43, stdev=7929.72 00:22:44.798 clat percentiles (usec): 00:22:44.798 | 1.00th=[29492], 5.00th=[32375], 10.00th=[43254], 20.00th=[45351], 00:22:44.798 | 30.00th=[45876], 40.00th=[46400], 50.00th=[46924], 60.00th=[47973], 00:22:44.798 | 70.00th=[48497], 80.00th=[50070], 90.00th=[60556], 95.00th=[63177], 00:22:44.798 | 99.00th=[66847], 99.50th=[69731], 99.90th=[74974], 99.95th=[79168], 00:22:44.798 | 99.99th=[90702] 00:22:44.798 bw ( KiB/s): min=266240, max=437611, per=8.22%, avg=336376.55, stdev=36509.89, samples=20 00:22:44.798 iops : min= 1040, max= 1709, avg=1313.95, stdev=142.56, samples=20 00:22:44.798 lat (msec) : 10=0.02%, 20=0.43%, 50=79.56%, 100=19.98% 00:22:44.798 cpu : usr=0.46%, sys=5.88%, ctx=2722, majf=0, minf=4097 00:22:44.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.798 issued rwts: total=13201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.798 job7: (groupid=0, jobs=1): err= 0: pid=1747874: Fri Jul 26 21:26:17 2024 00:22:44.798 read: IOPS=1944, BW=486MiB/s (510MB/s)(4877MiB/10031msec) 00:22:44.798 slat (usec): min=12, max=16221, avg=509.39, stdev=1254.04 00:22:44.798 clat (usec): min=9107, max=65529, avg=32367.89, stdev=9001.70 00:22:44.798 lat (usec): min=9318, max=75447, avg=32877.29, stdev=9180.27 00:22:44.798 clat percentiles (usec): 00:22:44.798 | 1.00th=[13960], 5.00th=[15401], 10.00th=[16057], 20.00th=[29754], 00:22:44.798 | 30.00th=[30278], 40.00th=[31065], 50.00th=[31589], 60.00th=[31851], 00:22:44.798 | 70.00th=[32637], 80.00th=[35390], 90.00th=[46924], 95.00th=[47973], 00:22:44.798 | 99.00th=[52691], 99.50th=[56361], 99.90th=[61080], 99.95th=[61604], 00:22:44.798 | 99.99th=[62129] 00:22:44.798 bw ( KiB/s): min=331776, max=944128, per=12.16%, avg=497817.60, stdev=127340.91, samples=20 00:22:44.798 iops : min= 1296, max= 3688, avg=1944.60, stdev=497.43, samples=20 00:22:44.798 lat (msec) : 10=0.03%, 20=12.30%, 50=85.54%, 100=2.13% 00:22:44.798 cpu : usr=0.68%, sys=7.51%, ctx=3617, majf=0, minf=3221 00:22:44.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.798 issued rwts: total=19509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.798 job8: (groupid=0, jobs=1): err= 0: pid=1747875: Fri Jul 26 21:26:17 2024 00:22:44.798 read: IOPS=1424, BW=356MiB/s (373MB/s)(3576MiB/10041msec) 00:22:44.798 slat (usec): min=11, max=18955, avg=677.33, stdev=1742.24 00:22:44.798 clat (usec): min=10696, max=92860, avg=44212.11, stdev=8171.92 00:22:44.798 lat (usec): min=10936, max=92874, avg=44889.44, stdev=8396.99 00:22:44.798 clat percentiles (usec): 00:22:44.798 | 1.00th=[29230], 5.00th=[30016], 10.00th=[31065], 20.00th=[33424], 00:22:44.798 | 30.00th=[44827], 40.00th=[45351], 50.00th=[45876], 60.00th=[46924], 00:22:44.798 | 70.00th=[47449], 80.00th=[48497], 90.00th=[50070], 95.00th=[56886], 00:22:44.798 | 99.00th=[65274], 99.50th=[67634], 99.90th=[73925], 99.95th=[78119], 00:22:44.798 | 99.99th=[80217] 00:22:44.798 bw ( KiB/s): min=282624, max=515584, per=8.90%, avg=364552.35, stdev=66181.05, samples=20 
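
The per-job read statistics in this pass are produced by fio 3.35 from the job file that fio-wrapper builds out of its -p/-i/-d/-t/-r flags. A minimal hand-written equivalent of that read pass is sketched below; the generated job file itself is not shown in this log, so the stanza layout here is an assumption reconstructed from the [global] and [jobN] parameters printed above.

# Sketch only: an approximation of the job file behind
#   fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
# (parameters copied from the [global]/[jobN] dump earlier in this log).
cat > multiconnection-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme10n1
EOF
# ...continue with one [jobN] stanza per remaining namespace (/dev/nvme1n1 .. /dev/nvme9n1)
fio multiconnection-read.fio
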
00:22:44.798 iops : min= 1104, max= 2014, avg=1424.00, stdev=258.53, samples=20 00:22:44.798 lat (msec) : 20=0.23%, 50=89.00%, 100=10.77% 00:22:44.798 cpu : usr=0.43%, sys=5.21%, ctx=2943, majf=0, minf=4097 00:22:44.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.798 issued rwts: total=14302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.798 job9: (groupid=0, jobs=1): err= 0: pid=1747876: Fri Jul 26 21:26:17 2024 00:22:44.798 read: IOPS=1759, BW=440MiB/s (461MB/s)(4411MiB/10029msec) 00:22:44.798 slat (usec): min=12, max=13818, avg=555.77, stdev=1346.32 00:22:44.799 clat (usec): min=12107, max=72197, avg=35789.16, stdev=8041.21 00:22:44.799 lat (usec): min=12497, max=74847, avg=36344.93, stdev=8212.90 00:22:44.799 clat percentiles (usec): 00:22:44.799 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29754], 20.00th=[30540], 00:22:44.799 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32375], 00:22:44.799 | 70.00th=[34341], 80.00th=[45351], 90.00th=[47449], 95.00th=[50070], 00:22:44.799 | 99.00th=[62129], 99.50th=[63701], 99.90th=[67634], 99.95th=[69731], 00:22:44.799 | 99.99th=[71828] 00:22:44.799 bw ( KiB/s): min=283648, max=520192, per=10.99%, avg=450022.40, stdev=81774.10, samples=20 00:22:44.799 iops : min= 1108, max= 2032, avg=1757.90, stdev=319.43, samples=20 00:22:44.799 lat (msec) : 20=0.18%, 50=95.01%, 100=4.81% 00:22:44.799 cpu : usr=0.69%, sys=6.64%, ctx=3476, majf=0, minf=4097 00:22:44.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:44.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.799 issued rwts: total=17642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.799 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.799 job10: (groupid=0, jobs=1): err= 0: pid=1747880: Fri Jul 26 21:26:17 2024 00:22:44.799 read: IOPS=1348, BW=337MiB/s (354MB/s)(3388MiB/10045msec) 00:22:44.799 slat (usec): min=12, max=21316, avg=716.23, stdev=1867.38 00:22:44.799 clat (usec): min=12183, max=93376, avg=46673.78, stdev=9676.27 00:22:44.799 lat (usec): min=12440, max=93400, avg=47390.01, stdev=9942.69 00:22:44.799 clat percentiles (usec): 00:22:44.799 | 1.00th=[23725], 5.00th=[30540], 10.00th=[31851], 20.00th=[41681], 00:22:44.799 | 30.00th=[45876], 40.00th=[46400], 50.00th=[47449], 60.00th=[47973], 00:22:44.799 | 70.00th=[49021], 80.00th=[50070], 90.00th=[62129], 95.00th=[63701], 00:22:44.799 | 99.00th=[67634], 99.50th=[71828], 99.90th=[83362], 99.95th=[88605], 00:22:44.799 | 99.99th=[89654] 00:22:44.799 bw ( KiB/s): min=276480, max=483840, per=8.43%, avg=345267.20, stdev=56409.66, samples=20 00:22:44.799 iops : min= 1080, max= 1890, avg=1348.70, stdev=220.35, samples=20 00:22:44.799 lat (msec) : 20=0.52%, 50=78.11%, 100=21.37% 00:22:44.799 cpu : usr=0.53%, sys=5.67%, ctx=2880, majf=0, minf=4097 00:22:44.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:44.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.799 issued rwts: total=13550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.799 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:22:44.799 00:22:44.799 Run status group 0 (all jobs): 00:22:44.799 READ: bw=3999MiB/s (4193MB/s), 319MiB/s-486MiB/s (335MB/s-510MB/s), io=39.2GiB (42.1GB), run=10029-10045msec 00:22:44.799 00:22:44.799 Disk stats (read/write): 00:22:44.799 nvme0n1: ios=28901/0, merge=0/0, ticks=1220710/0, in_queue=1220710, util=96.89% 00:22:44.799 nvme10n1: ios=27654/0, merge=0/0, ticks=1220696/0, in_queue=1220696, util=97.11% 00:22:44.799 nvme1n1: ios=25254/0, merge=0/0, ticks=1222137/0, in_queue=1222137, util=97.45% 00:22:44.799 nvme2n1: ios=25678/0, merge=0/0, ticks=1221697/0, in_queue=1221697, util=97.64% 00:22:44.799 nvme3n1: ios=27376/0, merge=0/0, ticks=1221822/0, in_queue=1221822, util=97.72% 00:22:44.799 nvme4n1: ios=27646/0, merge=0/0, ticks=1220892/0, in_queue=1220892, util=98.14% 00:22:44.799 nvme5n1: ios=26005/0, merge=0/0, ticks=1222475/0, in_queue=1222475, util=98.32% 00:22:44.799 nvme6n1: ios=38485/0, merge=0/0, ticks=1221565/0, in_queue=1221565, util=98.47% 00:22:44.799 nvme7n1: ios=28213/0, merge=0/0, ticks=1221018/0, in_queue=1221018, util=98.90% 00:22:44.799 nvme8n1: ios=34750/0, merge=0/0, ticks=1222294/0, in_queue=1222294, util=99.13% 00:22:44.799 nvme9n1: ios=26695/0, merge=0/0, ticks=1221153/0, in_queue=1221153, util=99.30% 00:22:44.799 21:26:17 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:44.799 [global] 00:22:44.799 thread=1 00:22:44.799 invalidate=1 00:22:44.799 rw=randwrite 00:22:44.799 time_based=1 00:22:44.799 runtime=10 00:22:44.799 ioengine=libaio 00:22:44.799 direct=1 00:22:44.799 bs=262144 00:22:44.799 iodepth=64 00:22:44.799 norandommap=1 00:22:44.799 numjobs=1 00:22:44.799 00:22:44.799 [job0] 00:22:44.799 filename=/dev/nvme0n1 00:22:44.799 [job1] 00:22:44.799 filename=/dev/nvme10n1 00:22:44.799 [job2] 00:22:44.799 filename=/dev/nvme1n1 00:22:44.799 [job3] 00:22:44.799 filename=/dev/nvme2n1 00:22:44.799 [job4] 00:22:44.799 filename=/dev/nvme3n1 00:22:44.799 [job5] 00:22:44.799 filename=/dev/nvme4n1 00:22:44.799 [job6] 00:22:44.799 filename=/dev/nvme5n1 00:22:44.799 [job7] 00:22:44.799 filename=/dev/nvme6n1 00:22:44.799 [job8] 00:22:44.799 filename=/dev/nvme7n1 00:22:44.799 [job9] 00:22:44.799 filename=/dev/nvme8n1 00:22:44.799 [job10] 00:22:44.799 filename=/dev/nvme9n1 00:22:44.799 Could not set queue depth (nvme0n1) 00:22:44.799 Could not set queue depth (nvme10n1) 00:22:44.799 Could not set queue depth (nvme1n1) 00:22:44.799 Could not set queue depth (nvme2n1) 00:22:44.799 Could not set queue depth (nvme3n1) 00:22:44.799 Could not set queue depth (nvme4n1) 00:22:44.799 Could not set queue depth (nvme5n1) 00:22:44.799 Could not set queue depth (nvme6n1) 00:22:44.799 Could not set queue depth (nvme7n1) 00:22:44.799 Could not set queue depth (nvme8n1) 00:22:44.799 Could not set queue depth (nvme9n1) 00:22:44.799 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:22:44.799 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.799 fio-3.35 00:22:44.799 Starting 11 threads 00:22:54.773 00:22:54.773 job0: (groupid=0, jobs=1): err= 0: pid=1749619: Fri Jul 26 21:26:28 2024 00:22:54.773 write: IOPS=1116, BW=279MiB/s (293MB/s)(2806MiB/10053msec); 0 zone resets 00:22:54.773 slat (usec): min=23, max=25716, avg=870.65, stdev=1779.92 00:22:54.773 clat (msec): min=7, max=126, avg=56.44, stdev=12.38 00:22:54.773 lat (msec): min=8, max=126, avg=57.31, stdev=12.62 00:22:54.773 clat percentiles (msec): 00:22:54.773 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 52], 00:22:54.773 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 55], 60.00th=[ 56], 00:22:54.773 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 71], 95.00th=[ 73], 00:22:54.773 | 99.00th=[ 85], 99.50th=[ 87], 99.90th=[ 111], 99.95th=[ 115], 00:22:54.773 | 99.99th=[ 116] 00:22:54.773 bw ( KiB/s): min=222208, max=368640, per=8.03%, avg=285696.00, stdev=40878.07, samples=20 00:22:54.773 iops : min= 868, max= 1440, avg=1116.00, stdev=159.68, samples=20 00:22:54.773 lat (msec) : 10=0.08%, 20=0.94%, 50=13.97%, 100=84.82%, 250=0.19% 00:22:54.773 cpu : usr=2.66%, sys=4.56%, ctx=2783, majf=0, minf=1 00:22:54.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:54.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.773 issued rwts: total=0,11223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.773 job1: (groupid=0, jobs=1): err= 0: pid=1749632: Fri Jul 26 21:26:28 2024 00:22:54.773 write: IOPS=1509, BW=377MiB/s (396MB/s)(3794MiB/10053msec); 0 zone resets 00:22:54.773 slat (usec): min=20, max=13599, avg=655.39, stdev=1432.36 00:22:54.773 clat (msec): min=8, max=122, avg=41.72, stdev=18.88 00:22:54.773 lat (msec): min=8, max=122, avg=42.38, stdev=19.19 00:22:54.773 clat percentiles (msec): 00:22:54.773 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:22:54.773 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 38], 60.00th=[ 52], 00:22:54.773 | 70.00th=[ 53], 80.00th=[ 56], 90.00th=[ 70], 95.00th=[ 71], 00:22:54.773 | 99.00th=[ 81], 99.50th=[ 87], 99.90th=[ 106], 99.95th=[ 111], 00:22:54.773 | 99.99th=[ 123] 00:22:54.773 bw ( KiB/s): min=212992, max=889344, per=10.88%, avg=386918.40, stdev=196784.00, samples=20 00:22:54.773 iops : min= 832, max= 3474, avg=1511.40, stdev=768.69, samples=20 00:22:54.773 lat (msec) : 10=0.03%, 20=27.20%, 50=30.29%, 100=42.36%, 250=0.12% 00:22:54.773 cpu : usr=2.98%, sys=4.97%, ctx=3494, majf=0, minf=1 00:22:54.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:54.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:54.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.773 issued rwts: total=0,15177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.773 job2: (groupid=0, jobs=1): err= 0: pid=1749643: Fri Jul 26 21:26:28 2024 00:22:54.773 write: IOPS=1653, BW=413MiB/s (433MB/s)(4148MiB/10033msec); 0 zone resets 00:22:54.773 slat (usec): min=22, max=7406, avg=599.20, stdev=1080.78 00:22:54.773 clat (usec): min=4555, max=69590, avg=38088.96, stdev=5184.41 00:22:54.773 lat (usec): min=4607, max=69618, avg=38688.16, stdev=5195.77 00:22:54.773 clat percentiles (usec): 00:22:54.773 | 1.00th=[32900], 5.00th=[33817], 10.00th=[34866], 20.00th=[35390], 00:22:54.773 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36963], 60.00th=[37487], 00:22:54.773 | 70.00th=[37487], 80.00th=[38011], 90.00th=[39584], 95.00th=[53216], 00:22:54.773 | 99.00th=[56361], 99.50th=[57410], 99.90th=[60556], 99.95th=[64226], 00:22:54.773 | 99.99th=[69731] 00:22:54.773 bw ( KiB/s): min=299520, max=446464, per=11.90%, avg=423116.80, stdev=42015.34, samples=20 00:22:54.773 iops : min= 1170, max= 1744, avg=1652.80, stdev=164.12, samples=20 00:22:54.773 lat (msec) : 10=0.04%, 20=0.11%, 50=91.69%, 100=8.15% 00:22:54.773 cpu : usr=3.19%, sys=5.59%, ctx=4105, majf=0, minf=1 00:22:54.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:54.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.773 issued rwts: total=0,16591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.773 job3: (groupid=0, jobs=1): err= 0: pid=1749649: Fri Jul 26 21:26:28 2024 00:22:54.773 write: IOPS=974, BW=244MiB/s (255MB/s)(2446MiB/10043msec); 0 zone resets 00:22:54.773 slat (usec): min=29, max=12222, avg=1011.25, stdev=1772.72 00:22:54.773 clat (usec): min=16749, max=96636, avg=64665.91, stdev=9894.81 00:22:54.773 lat (usec): min=16781, max=96696, avg=65677.16, stdev=9950.28 00:22:54.773 clat percentiles (usec): 00:22:54.773 | 1.00th=[51119], 5.00th=[52691], 10.00th=[53740], 20.00th=[55313], 00:22:54.773 | 30.00th=[55837], 40.00th=[56886], 50.00th=[60556], 60.00th=[71828], 00:22:54.773 | 70.00th=[73925], 80.00th=[74974], 90.00th=[76022], 95.00th=[77071], 00:22:54.773 | 99.00th=[79168], 99.50th=[80217], 99.90th=[88605], 99.95th=[91751], 00:22:54.773 | 99.99th=[96994] 00:22:54.773 bw ( KiB/s): min=212992, max=294912, per=7.00%, avg=248832.00, stdev=36106.18, samples=20 00:22:54.773 iops : min= 832, max= 1152, avg=972.00, stdev=141.04, samples=20 00:22:54.773 lat (msec) : 20=0.08%, 50=0.54%, 100=99.38% 00:22:54.773 cpu : usr=2.53%, sys=4.34%, ctx=2469, majf=0, minf=1 00:22:54.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:54.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.774 issued rwts: total=0,9783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.774 job4: (groupid=0, jobs=1): err= 0: pid=1749653: Fri Jul 26 21:26:28 2024 00:22:54.774 write: IOPS=1654, BW=414MiB/s (434MB/s)(4150MiB/10033msec); 0 zone resets 00:22:54.774 slat (usec): min=21, max=7649, avg=593.79, stdev=1081.98 00:22:54.774 clat (usec): min=9585, max=86686, 
avg=38075.40, stdev=5502.07 00:22:54.774 lat (usec): min=9633, max=86742, avg=38669.19, stdev=5518.69 00:22:54.774 clat percentiles (usec): 00:22:54.774 | 1.00th=[30540], 5.00th=[33817], 10.00th=[34866], 20.00th=[35390], 00:22:54.774 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36963], 60.00th=[37487], 00:22:54.774 | 70.00th=[37487], 80.00th=[38011], 90.00th=[40109], 95.00th=[53216], 00:22:54.774 | 99.00th=[56361], 99.50th=[57410], 99.90th=[76022], 99.95th=[80217], 00:22:54.774 | 99.99th=[85459] 00:22:54.774 bw ( KiB/s): min=301056, max=450048, per=11.90%, avg=423321.60, stdev=41957.57, samples=20 00:22:54.774 iops : min= 1176, max= 1758, avg=1653.60, stdev=163.90, samples=20 00:22:54.774 lat (msec) : 10=0.03%, 20=0.11%, 50=91.53%, 100=8.33% 00:22:54.774 cpu : usr=3.18%, sys=5.20%, ctx=4118, majf=0, minf=1 00:22:54.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:54.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.774 issued rwts: total=0,16599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.774 job5: (groupid=0, jobs=1): err= 0: pid=1749671: Fri Jul 26 21:26:28 2024 00:22:54.774 write: IOPS=1126, BW=282MiB/s (295MB/s)(2828MiB/10043msec); 0 zone resets 00:22:54.774 slat (usec): min=22, max=13033, avg=860.33, stdev=1589.01 00:22:54.774 clat (usec): min=5004, max=96843, avg=55930.85, stdev=13067.19 00:22:54.774 lat (msec): min=5, max=100, avg=56.79, stdev=13.25 00:22:54.774 clat percentiles (usec): 00:22:54.774 | 1.00th=[29754], 5.00th=[34866], 10.00th=[36439], 20.00th=[48497], 00:22:54.774 | 30.00th=[53216], 40.00th=[54264], 50.00th=[55313], 60.00th=[56361], 00:22:54.774 | 70.00th=[57934], 80.00th=[69731], 90.00th=[74974], 95.00th=[76022], 00:22:54.774 | 99.00th=[83362], 99.50th=[86508], 99.90th=[91751], 99.95th=[92799], 00:22:54.774 | 99.99th=[96994] 00:22:54.774 bw ( KiB/s): min=210944, max=444928, per=8.10%, avg=288000.00, stdev=67400.32, samples=20 00:22:54.774 iops : min= 824, max= 1738, avg=1125.00, stdev=263.28, samples=20 00:22:54.774 lat (msec) : 10=0.06%, 20=0.12%, 50=20.27%, 100=79.55% 00:22:54.774 cpu : usr=2.91%, sys=4.36%, ctx=2881, majf=0, minf=1 00:22:54.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:54.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.774 issued rwts: total=0,11313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.774 job6: (groupid=0, jobs=1): err= 0: pid=1749678: Fri Jul 26 21:26:28 2024 00:22:54.774 write: IOPS=937, BW=234MiB/s (246MB/s)(2358MiB/10056msec); 0 zone resets 00:22:54.774 slat (usec): min=26, max=13030, avg=1048.23, stdev=1951.82 00:22:54.774 clat (msec): min=2, max=122, avg=67.16, stdev=10.50 00:22:54.774 lat (msec): min=2, max=122, avg=68.21, stdev=10.61 00:22:54.774 clat percentiles (msec): 00:22:54.774 | 1.00th=[ 47], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:22:54.774 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 71], 60.00th=[ 73], 00:22:54.774 | 70.00th=[ 75], 80.00th=[ 75], 90.00th=[ 77], 95.00th=[ 78], 00:22:54.774 | 99.00th=[ 81], 99.50th=[ 82], 99.90th=[ 115], 99.95th=[ 123], 00:22:54.774 | 99.99th=[ 123] 00:22:54.774 bw ( KiB/s): min=212992, max=309248, per=6.74%, avg=239823.75, 
stdev=33914.01, samples=20 00:22:54.774 iops : min= 832, max= 1208, avg=936.80, stdev=132.48, samples=20 00:22:54.774 lat (msec) : 4=0.03%, 10=0.17%, 20=0.08%, 50=2.25%, 100=97.25% 00:22:54.774 lat (msec) : 250=0.21% 00:22:54.774 cpu : usr=2.37%, sys=4.04%, ctx=2369, majf=0, minf=1 00:22:54.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:54.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.774 issued rwts: total=0,9432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.774 job7: (groupid=0, jobs=1): err= 0: pid=1749683: Fri Jul 26 21:26:28 2024 00:22:54.774 write: IOPS=1247, BW=312MiB/s (327MB/s)(3130MiB/10031msec); 0 zone resets 00:22:54.774 slat (usec): min=22, max=15668, avg=781.64, stdev=1548.14 00:22:54.774 clat (usec): min=9190, max=98565, avg=50482.54, stdev=11881.53 00:22:54.774 lat (usec): min=9268, max=99653, avg=51264.18, stdev=12084.81 00:22:54.774 clat percentiles (usec): 00:22:54.774 | 1.00th=[31327], 5.00th=[34341], 10.00th=[35390], 20.00th=[36963], 00:22:54.774 | 30.00th=[39584], 40.00th=[51119], 50.00th=[52691], 60.00th=[53216], 00:22:54.774 | 70.00th=[54264], 80.00th=[56361], 90.00th=[68682], 95.00th=[70779], 00:22:54.774 | 99.00th=[81265], 99.50th=[85459], 99.90th=[88605], 99.95th=[92799], 00:22:54.774 | 99.99th=[98042] 00:22:54.774 bw ( KiB/s): min=212992, max=450048, per=8.96%, avg=318848.00, stdev=70896.92, samples=20 00:22:54.774 iops : min= 832, max= 1758, avg=1245.50, stdev=276.94, samples=20 00:22:54.774 lat (msec) : 10=0.02%, 20=0.35%, 50=33.20%, 100=66.43% 00:22:54.774 cpu : usr=2.91%, sys=4.60%, ctx=3061, majf=0, minf=1 00:22:54.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:54.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.774 issued rwts: total=0,12518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.774 job8: (groupid=0, jobs=1): err= 0: pid=1749699: Fri Jul 26 21:26:28 2024 00:22:54.774 write: IOPS=973, BW=243MiB/s (255MB/s)(2445MiB/10043msec); 0 zone resets 00:22:54.774 slat (usec): min=25, max=12263, avg=1017.01, stdev=1748.88 00:22:54.774 clat (usec): min=16869, max=96764, avg=64694.55, stdev=9863.54 00:22:54.774 lat (usec): min=16906, max=96821, avg=65711.57, stdev=9924.36 00:22:54.774 clat percentiles (usec): 00:22:54.774 | 1.00th=[51119], 5.00th=[52691], 10.00th=[53740], 20.00th=[55313], 00:22:54.774 | 30.00th=[55837], 40.00th=[56886], 50.00th=[60556], 60.00th=[71828], 00:22:54.774 | 70.00th=[73925], 80.00th=[74974], 90.00th=[76022], 95.00th=[77071], 00:22:54.774 | 99.00th=[79168], 99.50th=[80217], 99.90th=[88605], 99.95th=[91751], 00:22:54.774 | 99.99th=[96994] 00:22:54.774 bw ( KiB/s): min=213504, max=292352, per=6.99%, avg=248704.00, stdev=35768.66, samples=20 00:22:54.774 iops : min= 834, max= 1142, avg=971.50, stdev=139.72, samples=20 00:22:54.774 lat (msec) : 20=0.08%, 50=0.41%, 100=99.51% 00:22:54.774 cpu : usr=2.81%, sys=4.34%, ctx=2458, majf=0, minf=1 00:22:54.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:54.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:22:54.774 issued rwts: total=0,9778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.774 job9: (groupid=0, jobs=1): err= 0: pid=1749711: Fri Jul 26 21:26:28 2024 00:22:54.774 write: IOPS=1021, BW=255MiB/s (268MB/s)(2568MiB/10051msec); 0 zone resets 00:22:54.774 slat (usec): min=22, max=23561, avg=954.96, stdev=1932.32 00:22:54.774 clat (msec): min=22, max=122, avg=61.64, stdev=14.55 00:22:54.774 lat (msec): min=22, max=122, avg=62.60, stdev=14.76 00:22:54.774 clat percentiles (msec): 00:22:54.774 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 52], 00:22:54.774 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 69], 60.00th=[ 70], 00:22:54.774 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 77], 95.00th=[ 78], 00:22:54.774 | 99.00th=[ 86], 99.50th=[ 89], 99.90th=[ 112], 99.95th=[ 118], 00:22:54.774 | 99.99th=[ 121] 00:22:54.774 bw ( KiB/s): min=210944, max=449536, per=7.35%, avg=261350.40, stdev=65114.53, samples=20 00:22:54.774 iops : min= 824, max= 1756, avg=1020.90, stdev=254.35, samples=20 00:22:54.774 lat (msec) : 50=18.25%, 100=81.53%, 250=0.21% 00:22:54.774 cpu : usr=2.29%, sys=3.90%, ctx=2573, majf=0, minf=1 00:22:54.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:54.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.774 issued rwts: total=0,10272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.774 job10: (groupid=0, jobs=1): err= 0: pid=1749720: Fri Jul 26 21:26:28 2024 00:22:54.774 write: IOPS=1697, BW=424MiB/s (445MB/s)(4258MiB/10031msec); 0 zone resets 00:22:54.774 slat (usec): min=17, max=11037, avg=584.45, stdev=1211.18 00:22:54.774 clat (usec): min=4452, max=79088, avg=37098.32, stdev=16693.89 00:22:54.774 lat (usec): min=4501, max=80221, avg=37682.78, stdev=16953.97 00:22:54.774 clat percentiles (usec): 00:22:54.774 | 1.00th=[16450], 5.00th=[17171], 10.00th=[17433], 20.00th=[18220], 00:22:54.774 | 30.00th=[18744], 40.00th=[34866], 50.00th=[36439], 60.00th=[38011], 00:22:54.774 | 70.00th=[51643], 80.00th=[53740], 90.00th=[55837], 95.00th=[67634], 00:22:54.774 | 99.00th=[71828], 99.50th=[73925], 99.90th=[76022], 99.95th=[77071], 00:22:54.774 | 99.99th=[79168] 00:22:54.774 bw ( KiB/s): min=230912, max=898560, per=12.21%, avg=434376.90, stdev=214551.50, samples=20 00:22:54.774 iops : min= 902, max= 3510, avg=1696.75, stdev=838.12, samples=20 00:22:54.774 lat (msec) : 10=0.04%, 20=33.92%, 50=31.36%, 100=34.68% 00:22:54.774 cpu : usr=3.03%, sys=4.78%, ctx=3834, majf=0, minf=1 00:22:54.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:54.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.775 issued rwts: total=0,17032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.775 00:22:54.775 Run status group 0 (all jobs): 00:22:54.775 WRITE: bw=3473MiB/s (3642MB/s), 234MiB/s-424MiB/s (246MB/s-445MB/s), io=34.1GiB (36.6GB), run=10031-10056msec 00:22:54.775 00:22:54.775 Disk stats (read/write): 00:22:54.775 nvme0n1: ios=49/22074, merge=0/0, ticks=17/1213944, in_queue=1213961, util=96.54% 00:22:54.775 nvme10n1: ios=0/29970, merge=0/0, ticks=0/1216170, in_queue=1216170, util=96.70% 
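
Every /dev/nvmeXn1 device appearing in the read and randwrite statistics above was attached earlier in this trace with the same connect-and-poll pattern: nvme connect over RDMA to 192.168.100.8:4420, then a loop that waits until lsblk reports a block device with the expected SPDKn serial. A minimal sketch of that pattern follows; the waitforserial helper shown is a simplified stand-in for the one in common/autotest_common.sh, not its actual implementation.

# Sketch of the attach loop from the trace above (simplified).
waitforserial() {
    # Poll until a block device with the given serial (SPDK1, SPDK2, ...) appears.
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        sleep 2
    done
    return 1
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e \
        -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    waitforserial "SPDK$i"
done
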
00:22:54.775 nvme1n1: ios=0/32587, merge=0/0, ticks=0/1217945, in_queue=1217945, util=97.03% 00:22:54.775 nvme2n1: ios=0/19119, merge=0/0, ticks=0/1214614, in_queue=1214614, util=97.22% 00:22:54.775 nvme3n1: ios=0/32600, merge=0/0, ticks=0/1218630, in_queue=1218630, util=97.32% 00:22:54.775 nvme4n1: ios=0/22178, merge=0/0, ticks=0/1213868, in_queue=1213868, util=97.73% 00:22:54.775 nvme5n1: ios=0/18496, merge=0/0, ticks=0/1214839, in_queue=1214839, util=97.95% 00:22:54.775 nvme6n1: ios=0/24429, merge=0/0, ticks=0/1217053, in_queue=1217053, util=98.07% 00:22:54.775 nvme7n1: ios=0/19108, merge=0/0, ticks=0/1212707, in_queue=1212707, util=98.57% 00:22:54.775 nvme8n1: ios=0/20169, merge=0/0, ticks=0/1215061, in_queue=1215061, util=98.82% 00:22:54.775 nvme9n1: ios=0/33456, merge=0/0, ticks=0/1219263, in_queue=1219263, util=98.98% 00:22:54.775 21:26:28 -- target/multiconnection.sh@36 -- # sync 00:22:54.775 21:26:28 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:54.775 21:26:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.775 21:26:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:55.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:55.033 21:26:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:55.033 21:26:29 -- common/autotest_common.sh@1198 -- # local i=0 00:22:55.033 21:26:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:55.033 21:26:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:55.033 21:26:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:55.033 21:26:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:55.033 21:26:29 -- common/autotest_common.sh@1210 -- # return 0 00:22:55.033 21:26:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:55.033 21:26:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.033 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:22:55.033 21:26:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.033 21:26:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.033 21:26:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:55.967 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:55.967 21:26:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:55.967 21:26:30 -- common/autotest_common.sh@1198 -- # local i=0 00:22:55.967 21:26:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:55.967 21:26:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:55.967 21:26:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:55.967 21:26:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:55.967 21:26:30 -- common/autotest_common.sh@1210 -- # return 0 00:22:55.967 21:26:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:55.967 21:26:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.967 21:26:30 -- common/autotest_common.sh@10 -- # set +x 00:22:55.967 21:26:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.967 21:26:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.967 21:26:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:56.902 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:56.902 
21:26:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:56.902 21:26:31 -- common/autotest_common.sh@1198 -- # local i=0 00:22:56.902 21:26:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:56.902 21:26:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:56.902 21:26:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:56.902 21:26:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:56.902 21:26:31 -- common/autotest_common.sh@1210 -- # return 0 00:22:56.902 21:26:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:56.902 21:26:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.902 21:26:31 -- common/autotest_common.sh@10 -- # set +x 00:22:56.902 21:26:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.902 21:26:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:56.902 21:26:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:57.834 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:57.835 21:26:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:57.835 21:26:32 -- common/autotest_common.sh@1198 -- # local i=0 00:22:57.835 21:26:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:57.835 21:26:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:57.835 21:26:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:57.835 21:26:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:58.092 21:26:32 -- common/autotest_common.sh@1210 -- # return 0 00:22:58.092 21:26:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:58.092 21:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.092 21:26:32 -- common/autotest_common.sh@10 -- # set +x 00:22:58.092 21:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.092 21:26:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:58.092 21:26:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:59.024 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:59.024 21:26:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:59.024 21:26:33 -- common/autotest_common.sh@1198 -- # local i=0 00:22:59.024 21:26:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:59.024 21:26:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:59.024 21:26:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:59.024 21:26:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:59.024 21:26:33 -- common/autotest_common.sh@1210 -- # return 0 00:22:59.024 21:26:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:59.024 21:26:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.024 21:26:33 -- common/autotest_common.sh@10 -- # set +x 00:22:59.024 21:26:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.024 21:26:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:59.024 21:26:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:59.959 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:59.959 21:26:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:59.959 21:26:34 
-- common/autotest_common.sh@1198 -- # local i=0 00:22:59.959 21:26:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:59.959 21:26:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:59.959 21:26:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:59.959 21:26:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:59.959 21:26:34 -- common/autotest_common.sh@1210 -- # return 0 00:22:59.959 21:26:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:59.959 21:26:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.959 21:26:34 -- common/autotest_common.sh@10 -- # set +x 00:22:59.959 21:26:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.959 21:26:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:59.959 21:26:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:00.894 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:00.894 21:26:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:00.894 21:26:35 -- common/autotest_common.sh@1198 -- # local i=0 00:23:00.894 21:26:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:00.894 21:26:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:23:00.894 21:26:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:00.894 21:26:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:00.894 21:26:35 -- common/autotest_common.sh@1210 -- # return 0 00:23:00.894 21:26:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:00.894 21:26:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.894 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:23:00.894 21:26:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.894 21:26:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.894 21:26:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:01.829 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:01.829 21:26:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:01.829 21:26:36 -- common/autotest_common.sh@1198 -- # local i=0 00:23:01.829 21:26:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:01.829 21:26:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:23:01.829 21:26:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:01.829 21:26:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:01.829 21:26:36 -- common/autotest_common.sh@1210 -- # return 0 00:23:01.829 21:26:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:01.829 21:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.829 21:26:36 -- common/autotest_common.sh@10 -- # set +x 00:23:01.829 21:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.829 21:26:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.829 21:26:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:03.203 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:03.203 21:26:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:03.203 21:26:37 -- common/autotest_common.sh@1198 -- # local i=0 00:23:03.204 21:26:37 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:03.204 21:26:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:23:03.204 21:26:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:03.204 21:26:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:03.204 21:26:37 -- common/autotest_common.sh@1210 -- # return 0 00:23:03.204 21:26:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:03.204 21:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.204 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:23:03.204 21:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.204 21:26:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:03.204 21:26:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:03.770 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:03.770 21:26:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:03.770 21:26:38 -- common/autotest_common.sh@1198 -- # local i=0 00:23:03.770 21:26:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:03.770 21:26:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:23:04.028 21:26:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:04.028 21:26:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:04.028 21:26:38 -- common/autotest_common.sh@1210 -- # return 0 00:23:04.028 21:26:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:04.028 21:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.028 21:26:38 -- common/autotest_common.sh@10 -- # set +x 00:23:04.028 21:26:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.028 21:26:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:04.028 21:26:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:04.964 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:04.964 21:26:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:04.964 21:26:39 -- common/autotest_common.sh@1198 -- # local i=0 00:23:04.964 21:26:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:04.964 21:26:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:23:04.964 21:26:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:04.964 21:26:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:04.964 21:26:39 -- common/autotest_common.sh@1210 -- # return 0 00:23:04.964 21:26:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:04.964 21:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.964 21:26:39 -- common/autotest_common.sh@10 -- # set +x 00:23:04.964 21:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.964 21:26:39 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:04.964 21:26:39 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:04.964 21:26:39 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:04.964 21:26:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:04.964 21:26:39 -- nvmf/common.sh@116 -- # sync 00:23:04.964 21:26:39 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:04.964 21:26:39 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:04.964 21:26:39 -- 
nvmf/common.sh@119 -- # set +e 00:23:04.964 21:26:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:04.964 21:26:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:04.964 rmmod nvme_rdma 00:23:04.964 rmmod nvme_fabrics 00:23:04.964 21:26:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:04.964 21:26:39 -- nvmf/common.sh@123 -- # set -e 00:23:04.964 21:26:39 -- nvmf/common.sh@124 -- # return 0 00:23:04.964 21:26:39 -- nvmf/common.sh@477 -- # '[' -n 1741530 ']' 00:23:04.964 21:26:39 -- nvmf/common.sh@478 -- # killprocess 1741530 00:23:04.964 21:26:39 -- common/autotest_common.sh@926 -- # '[' -z 1741530 ']' 00:23:04.964 21:26:39 -- common/autotest_common.sh@930 -- # kill -0 1741530 00:23:04.964 21:26:39 -- common/autotest_common.sh@931 -- # uname 00:23:04.964 21:26:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:04.964 21:26:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1741530 00:23:04.964 21:26:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:04.964 21:26:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:04.964 21:26:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1741530' 00:23:04.964 killing process with pid 1741530 00:23:04.964 21:26:39 -- common/autotest_common.sh@945 -- # kill 1741530 00:23:04.964 21:26:39 -- common/autotest_common.sh@950 -- # wait 1741530 00:23:05.532 21:26:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:05.532 21:26:40 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:05.532 00:23:05.532 real 1m16.794s 00:23:05.532 user 4m53.543s 00:23:05.532 sys 0m21.957s 00:23:05.532 21:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.532 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:23:05.532 ************************************ 00:23:05.532 END TEST nvmf_multiconnection 00:23:05.532 ************************************ 00:23:05.532 21:26:40 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:05.532 21:26:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:05.533 21:26:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:05.533 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:23:05.533 ************************************ 00:23:05.533 START TEST nvmf_initiator_timeout 00:23:05.533 ************************************ 00:23:05.533 21:26:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:05.533 * Looking for test storage... 
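The cnode6-cnode11 teardown that the multiconnection run finished with above repeats one fixed pattern per subsystem; a rough shell sketch of that pattern (assuming nvme-cli and SPDK's scripts/rpc.py are on PATH, and the SPDK<i> serial scheme used by this job):

# For each exported subsystem: disconnect the initiator, wait for its block
# device (matched by serial) to disappear, then delete the subsystem on the target.
for i in $(seq 6 11); do
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
  while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do sleep 1; done
  ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done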
00:23:05.533 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:05.533 21:26:40 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.533 21:26:40 -- nvmf/common.sh@7 -- # uname -s 00:23:05.533 21:26:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.533 21:26:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.533 21:26:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.533 21:26:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.533 21:26:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.533 21:26:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.533 21:26:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.533 21:26:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.533 21:26:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.533 21:26:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.533 21:26:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:05.533 21:26:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:05.533 21:26:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.533 21:26:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.533 21:26:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.533 21:26:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:05.819 21:26:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.819 21:26:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.819 21:26:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.819 21:26:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.819 21:26:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.819 21:26:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.819 21:26:40 -- paths/export.sh@5 -- # export PATH 00:23:05.819 21:26:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.819 21:26:40 -- nvmf/common.sh@46 -- # : 0 00:23:05.819 21:26:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:05.819 21:26:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:05.819 21:26:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:05.819 21:26:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.819 21:26:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.819 21:26:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:05.819 21:26:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:05.819 21:26:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:05.819 21:26:40 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.819 21:26:40 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.819 21:26:40 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:05.819 21:26:40 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:05.819 21:26:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.819 21:26:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:05.819 21:26:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:05.819 21:26:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:05.819 21:26:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.819 21:26:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.819 21:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.819 21:26:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:05.819 21:26:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:05.819 21:26:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:05.819 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:23:13.955 21:26:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:13.955 21:26:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:13.955 21:26:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:13.955 21:26:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:13.955 21:26:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:13.955 21:26:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:13.955 21:26:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:13.955 21:26:48 -- nvmf/common.sh@294 -- # net_devs=() 00:23:13.955 21:26:48 -- nvmf/common.sh@294 -- # local -ga net_devs 
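The gather_supported_nvmf_pci_devs step that follows builds per-vendor device lists (Intel 0x8086, Mellanox 0x15b3) and scans the PCI bus for RDMA-capable NICs; a simplified stand-in using plain lspci and sysfs (device IDs taken from the "Found" lines below, not an exhaustive list):

# ConnectX-4 Lx ports as discovered below (vendor 0x15b3, device 0x1015):
lspci -d 15b3:1015
# Net interfaces backed by one of those PCI functions:
ls /sys/bus/pci/devices/0000:d9:00.0/net/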
00:23:13.955 21:26:48 -- nvmf/common.sh@295 -- # e810=() 00:23:13.955 21:26:48 -- nvmf/common.sh@295 -- # local -ga e810 00:23:13.955 21:26:48 -- nvmf/common.sh@296 -- # x722=() 00:23:13.955 21:26:48 -- nvmf/common.sh@296 -- # local -ga x722 00:23:13.955 21:26:48 -- nvmf/common.sh@297 -- # mlx=() 00:23:13.955 21:26:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:13.955 21:26:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.955 21:26:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:13.955 21:26:48 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:13.955 21:26:48 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:13.955 21:26:48 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:13.955 21:26:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:13.955 21:26:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:13.955 21:26:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:13.955 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:13.955 21:26:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:13.955 21:26:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:13.955 21:26:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:13.955 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:13.955 21:26:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:13.955 21:26:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:13.955 21:26:48 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:13.955 21:26:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.955 21:26:48 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:13.955 21:26:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.955 21:26:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:13.955 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:13.955 21:26:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.955 21:26:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:13.955 21:26:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.955 21:26:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:13.955 21:26:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.955 21:26:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:13.955 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:13.955 21:26:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.955 21:26:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:13.955 21:26:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:13.955 21:26:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:13.955 21:26:48 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:13.955 21:26:48 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:13.955 21:26:48 -- nvmf/common.sh@57 -- # uname 00:23:13.955 21:26:48 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:13.955 21:26:48 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:13.955 21:26:48 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:13.955 21:26:48 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:13.955 21:26:48 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:13.955 21:26:48 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:13.955 21:26:48 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:13.955 21:26:48 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:13.955 21:26:48 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:13.955 21:26:48 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:13.955 21:26:48 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:13.955 21:26:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:13.955 21:26:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:13.955 21:26:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:13.955 21:26:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:13.955 21:26:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:13.955 21:26:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:13.955 21:26:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.955 21:26:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:13.956 21:26:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:13.956 21:26:48 -- nvmf/common.sh@104 -- # continue 2 00:23:13.956 21:26:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:13.956 21:26:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.956 21:26:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:13.956 21:26:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.956 21:26:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:13.956 21:26:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:13.956 21:26:48 -- nvmf/common.sh@104 -- # continue 2 00:23:13.956 21:26:48 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:13.956 21:26:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:13.956 21:26:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:13.956 21:26:48 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:13.956 21:26:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:13.956 21:26:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:13.956 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:13.956 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:13.956 altname enp217s0f0np0 00:23:13.956 altname ens818f0np0 00:23:13.956 inet 192.168.100.8/24 scope global mlx_0_0 00:23:13.956 valid_lft forever preferred_lft forever 00:23:13.956 21:26:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:13.956 21:26:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:13.956 21:26:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:13.956 21:26:48 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:13.956 21:26:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:13.956 21:26:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:13.956 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:13.956 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:13.956 altname enp217s0f1np1 00:23:13.956 altname ens818f1np1 00:23:13.956 inet 192.168.100.9/24 scope global mlx_0_1 00:23:13.956 valid_lft forever preferred_lft forever 00:23:13.956 21:26:48 -- nvmf/common.sh@410 -- # return 0 00:23:13.956 21:26:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:13.956 21:26:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:13.956 21:26:48 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:13.956 21:26:48 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:13.956 21:26:48 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:13.956 21:26:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:13.956 21:26:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:13.956 21:26:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:13.956 21:26:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:13.956 21:26:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:13.956 21:26:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:13.956 21:26:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.956 21:26:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:13.956 21:26:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:13.956 21:26:48 -- nvmf/common.sh@104 -- # continue 2 00:23:13.956 21:26:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:13.956 21:26:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.956 21:26:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:13.956 21:26:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.956 21:26:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:13.956 21:26:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 
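allocate_nic_ips above ends up with 192.168.100.8/24 on mlx_0_0 and 192.168.100.9/24 on mlx_0_1 (NVMF_IP_PREFIX plus NVMF_IP_LEAST_ADDR=8, counting up per port); when the addresses are not already present, the manual equivalent is roughly:

# Sketch of the address assignment the helper performs for the two ConnectX ports.
ip addr add 192.168.100.8/24 dev mlx_0_0
ip addr add 192.168.100.9/24 dev mlx_0_1
ip link set mlx_0_0 up
ip link set mlx_0_1 up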
00:23:13.956 21:26:48 -- nvmf/common.sh@104 -- # continue 2 00:23:13.956 21:26:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:13.956 21:26:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:13.956 21:26:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:13.956 21:26:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:13.956 21:26:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:13.956 21:26:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:13.956 21:26:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:13.956 21:26:48 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:13.956 192.168.100.9' 00:23:13.956 21:26:48 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:13.956 192.168.100.9' 00:23:13.956 21:26:48 -- nvmf/common.sh@445 -- # head -n 1 00:23:13.956 21:26:48 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:13.956 21:26:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:13.956 192.168.100.9' 00:23:13.956 21:26:48 -- nvmf/common.sh@446 -- # tail -n +2 00:23:13.956 21:26:48 -- nvmf/common.sh@446 -- # head -n 1 00:23:13.956 21:26:48 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:13.956 21:26:48 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:13.956 21:26:48 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:13.956 21:26:48 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:13.956 21:26:48 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:13.956 21:26:48 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:13.956 21:26:48 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:13.956 21:26:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:13.956 21:26:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:13.956 21:26:48 -- common/autotest_common.sh@10 -- # set +x 00:23:13.956 21:26:48 -- nvmf/common.sh@469 -- # nvmfpid=1757209 00:23:13.956 21:26:48 -- nvmf/common.sh@470 -- # waitforlisten 1757209 00:23:13.956 21:26:48 -- common/autotest_common.sh@819 -- # '[' -z 1757209 ']' 00:23:13.956 21:26:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.956 21:26:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:13.956 21:26:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.956 21:26:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:13.956 21:26:48 -- common/autotest_common.sh@10 -- # set +x 00:23:13.956 21:26:48 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:13.956 [2024-07-26 21:26:48.445919] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
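nvmfappstart/waitforlisten above boil down to launching nvmf_tgt and polling its RPC socket until it answers; a minimal sketch of that idea (the real helper also checks that the pid stays alive and caps the loop at max_retries=100):

# Start the target with the same flags as this job and wait for /var/tmp/spdk.sock.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done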
00:23:13.956 [2024-07-26 21:26:48.445970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.956 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.956 [2024-07-26 21:26:48.534290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:13.956 [2024-07-26 21:26:48.573166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:13.956 [2024-07-26 21:26:48.573273] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.956 [2024-07-26 21:26:48.573284] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.956 [2024-07-26 21:26:48.573293] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.956 [2024-07-26 21:26:48.573337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.956 [2024-07-26 21:26:48.573432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.956 [2024-07-26 21:26:48.573451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.956 [2024-07-26 21:26:48.573452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.524 21:26:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:14.525 21:26:49 -- common/autotest_common.sh@852 -- # return 0 00:23:14.525 21:26:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:14.525 21:26:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:14.525 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:23:14.525 21:26:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.525 21:26:49 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:14.525 21:26:49 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:14.525 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.525 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:23:14.525 Malloc0 00:23:14.525 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.525 21:26:49 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:14.525 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.525 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:23:14.525 Delay0 00:23:14.525 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.525 21:26:49 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:14.525 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.525 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:23:14.525 [2024-07-26 21:26:49.351300] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xae1510/0xb008c0) succeed. 00:23:14.525 [2024-07-26 21:26:49.362260] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae2b00/0xb80900) succeed. 
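The rpc_cmd calls above and in the next chunk amount to the following rpc.py sequence for exporting the delay bdev over RDMA (a sketch; the job drives the same RPCs through its rpc_cmd wrapper, with flags copied from the log):

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# Initiator side (-i 15 as set by NVME_CONNECT for RDMA earlier in the log):
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
  --hostid=8013ee90-59d8-e711-906e-00163566263e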
00:23:14.784 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.784 21:26:49 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:14.784 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.784 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:23:14.784 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.784 21:26:49 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:14.784 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.784 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:23:14.784 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.784 21:26:49 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:14.784 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.784 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:23:14.784 [2024-07-26 21:26:49.507368] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:14.784 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.784 21:26:49 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:15.720 21:26:50 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:15.720 21:26:50 -- common/autotest_common.sh@1177 -- # local i=0 00:23:15.720 21:26:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:15.720 21:26:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:15.720 21:26:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:17.622 21:26:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:17.896 21:26:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:17.896 21:26:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:17.896 21:26:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:17.896 21:26:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:17.896 21:26:52 -- common/autotest_common.sh@1187 -- # return 0 00:23:17.896 21:26:52 -- target/initiator_timeout.sh@35 -- # fio_pid=1758040 00:23:17.896 21:26:52 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:17.896 21:26:52 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:17.896 [global] 00:23:17.896 thread=1 00:23:17.896 invalidate=1 00:23:17.896 rw=write 00:23:17.896 time_based=1 00:23:17.896 runtime=60 00:23:17.896 ioengine=libaio 00:23:17.896 direct=1 00:23:17.896 bs=4096 00:23:17.896 iodepth=1 00:23:17.896 norandommap=0 00:23:17.896 numjobs=1 00:23:17.896 00:23:17.896 verify_dump=1 00:23:17.896 verify_backlog=512 00:23:17.896 verify_state_save=0 00:23:17.896 do_verify=1 00:23:17.896 verify=crc32c-intel 00:23:17.896 [job0] 00:23:17.896 filename=/dev/nvme0n1 00:23:17.896 Could not set queue depth (nvme0n1) 00:23:18.163 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:18.163 fio-3.35 00:23:18.163 Starting 1 thread 00:23:20.693 21:26:55 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:20.693 21:26:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.693 21:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:20.693 true 00:23:20.693 21:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.693 21:26:55 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:20.693 21:26:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.693 21:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:20.693 true 00:23:20.693 21:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.693 21:26:55 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:20.693 21:26:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.693 21:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:20.693 true 00:23:20.693 21:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.693 21:26:55 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:20.693 21:26:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.693 21:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:20.952 true 00:23:20.952 21:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.952 21:26:55 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:24.242 21:26:58 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:24.242 21:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:24.242 21:26:58 -- common/autotest_common.sh@10 -- # set +x 00:23:24.242 true 00:23:24.242 21:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:24.242 21:26:58 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:24.242 21:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:24.242 21:26:58 -- common/autotest_common.sh@10 -- # set +x 00:23:24.242 true 00:23:24.242 21:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:24.242 21:26:58 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:24.242 21:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:24.242 21:26:58 -- common/autotest_common.sh@10 -- # set +x 00:23:24.242 true 00:23:24.242 21:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:24.242 21:26:58 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:24.242 21:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:24.242 21:26:58 -- common/autotest_common.sh@10 -- # set +x 00:23:24.242 true 00:23:24.242 21:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:24.242 21:26:58 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:24.242 21:26:58 -- target/initiator_timeout.sh@54 -- # wait 1758040 00:24:20.507 00:24:20.507 job0: (groupid=0, jobs=1): err= 0: pid=1758173: Fri Jul 26 21:27:53 2024 00:24:20.507 read: IOPS=1254, BW=5018KiB/s (5139kB/s)(294MiB/60000msec) 00:24:20.507 slat (nsec): min=8285, max=76126, avg=9266.09, stdev=1050.19 00:24:20.507 clat (usec): min=79, max=440, avg=105.82, stdev= 6.87 00:24:20.507 lat (usec): min=95, max=449, avg=115.08, stdev= 6.93 00:24:20.507 clat percentiles (usec): 00:24:20.508 | 1.00th=[ 93], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 100], 00:24:20.508 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 
108], 00:24:20.508 | 70.00th=[ 110], 80.00th=[ 112], 90.00th=[ 115], 95.00th=[ 118], 00:24:20.508 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 128], 99.95th=[ 135], 00:24:20.508 | 99.99th=[ 277] 00:24:20.508 write: IOPS=1262, BW=5052KiB/s (5173kB/s)(296MiB/60000msec); 0 zone resets 00:24:20.508 slat (usec): min=7, max=10969, avg=11.51, stdev=39.85 00:24:20.508 clat (usec): min=74, max=42331k, avg=661.45, stdev=153778.04 00:24:20.508 lat (usec): min=94, max=42331k, avg=672.96, stdev=153778.04 00:24:20.508 clat percentiles (usec): 00:24:20.508 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 97], 00:24:20.508 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:24:20.508 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 115], 00:24:20.508 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 137], 99.95th=[ 172], 00:24:20.508 | 99.99th=[ 310] 00:24:20.508 bw ( KiB/s): min= 3568, max=19208, per=100.00%, avg=16384.00, stdev=3200.34, samples=36 00:24:20.508 iops : min= 892, max= 4802, avg=4096.00, stdev=800.09, samples=36 00:24:20.508 lat (usec) : 100=26.64%, 250=73.34%, 500=0.02%, 750=0.01% 00:24:20.508 lat (msec) : 2=0.01%, >=2000=0.01% 00:24:20.508 cpu : usr=1.71%, sys=3.48%, ctx=151063, majf=0, minf=143 00:24:20.508 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.508 issued rwts: total=75275,75776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.508 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:20.508 00:24:20.508 Run status group 0 (all jobs): 00:24:20.508 READ: bw=5018KiB/s (5139kB/s), 5018KiB/s-5018KiB/s (5139kB/s-5139kB/s), io=294MiB (308MB), run=60000-60000msec 00:24:20.508 WRITE: bw=5052KiB/s (5173kB/s), 5052KiB/s-5052KiB/s (5173kB/s-5173kB/s), io=296MiB (310MB), run=60000-60000msec 00:24:20.508 00:24:20.508 Disk stats (read/write): 00:24:20.508 nvme0n1: ios=75223/75264, merge=0/0, ticks=7219/7101, in_queue=14320, util=99.90% 00:24:20.508 21:27:53 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:20.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:20.508 21:27:53 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:20.508 21:27:53 -- common/autotest_common.sh@1198 -- # local i=0 00:24:20.508 21:27:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:20.508 21:27:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:20.508 21:27:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:20.508 21:27:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:20.508 21:27:54 -- common/autotest_common.sh@1210 -- # return 0 00:24:20.508 21:27:54 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:20.508 21:27:54 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:20.508 nvmf hotplug test: fio successful as expected 00:24:20.508 21:27:54 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.508 21:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:20.508 21:27:54 -- common/autotest_common.sh@10 -- # set +x 00:24:20.508 21:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:20.508 21:27:54 -- target/initiator_timeout.sh@69 -- # rm -f 
./local-job0-0-verify.state 00:24:20.508 21:27:54 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:20.508 21:27:54 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:20.508 21:27:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:20.508 21:27:54 -- nvmf/common.sh@116 -- # sync 00:24:20.508 21:27:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:20.508 21:27:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:20.508 21:27:54 -- nvmf/common.sh@119 -- # set +e 00:24:20.508 21:27:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:20.508 21:27:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:20.508 rmmod nvme_rdma 00:24:20.508 rmmod nvme_fabrics 00:24:20.508 21:27:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:20.508 21:27:54 -- nvmf/common.sh@123 -- # set -e 00:24:20.508 21:27:54 -- nvmf/common.sh@124 -- # return 0 00:24:20.508 21:27:54 -- nvmf/common.sh@477 -- # '[' -n 1757209 ']' 00:24:20.508 21:27:54 -- nvmf/common.sh@478 -- # killprocess 1757209 00:24:20.508 21:27:54 -- common/autotest_common.sh@926 -- # '[' -z 1757209 ']' 00:24:20.508 21:27:54 -- common/autotest_common.sh@930 -- # kill -0 1757209 00:24:20.508 21:27:54 -- common/autotest_common.sh@931 -- # uname 00:24:20.508 21:27:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:20.508 21:27:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1757209 00:24:20.508 21:27:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:20.508 21:27:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:20.508 21:27:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1757209' 00:24:20.508 killing process with pid 1757209 00:24:20.508 21:27:54 -- common/autotest_common.sh@945 -- # kill 1757209 00:24:20.508 21:27:54 -- common/autotest_common.sh@950 -- # wait 1757209 00:24:20.508 21:27:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:20.508 21:27:54 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:20.508 00:24:20.508 real 1m14.132s 00:24:20.508 user 4m34.126s 00:24:20.508 sys 0m9.104s 00:24:20.508 21:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.508 21:27:54 -- common/autotest_common.sh@10 -- # set +x 00:24:20.508 ************************************ 00:24:20.508 END TEST nvmf_initiator_timeout 00:24:20.508 ************************************ 00:24:20.508 21:27:54 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:20.508 21:27:54 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:24:20.508 21:27:54 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:24:20.508 21:27:54 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:24:20.508 21:27:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:20.508 21:27:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:20.508 21:27:54 -- common/autotest_common.sh@10 -- # set +x 00:24:20.508 ************************************ 00:24:20.508 START TEST nvmf_shutdown 00:24:20.508 ************************************ 00:24:20.508 21:27:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:24:20.508 * Looking for test storage... 
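Before the shutdown suite starts, note the mechanism the initiator_timeout run above relied on: while fio was writing to /dev/nvme0n1, the Delay0 latencies were raised far above their configured values and later restored, all through bdev_delay_update_latency. A sketch of that toggle, with values copied from the run above:

# Stall I/O by inflating the delay bdev latencies...
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
# ...then restore them so fio can complete and be reaped with 'wait'.
for metric in avg_read avg_write p99_read p99_write; do
  ./scripts/rpc.py bdev_delay_update_latency Delay0 "$metric" 30
done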
00:24:20.508 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:20.508 21:27:54 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.508 21:27:54 -- nvmf/common.sh@7 -- # uname -s 00:24:20.508 21:27:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.508 21:27:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.508 21:27:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.508 21:27:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.508 21:27:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.508 21:27:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.508 21:27:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.508 21:27:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.508 21:27:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.508 21:27:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.508 21:27:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:20.508 21:27:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:20.508 21:27:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.508 21:27:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.508 21:27:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.508 21:27:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:20.508 21:27:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.508 21:27:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.508 21:27:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.508 21:27:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.508 21:27:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.508 21:27:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.508 21:27:54 -- paths/export.sh@5 -- # export PATH 00:24:20.508 21:27:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.508 21:27:54 -- nvmf/common.sh@46 -- # : 0 00:24:20.509 21:27:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:20.509 21:27:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:20.509 21:27:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:20.509 21:27:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.509 21:27:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.509 21:27:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:20.509 21:27:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:20.509 21:27:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:20.509 21:27:54 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:20.509 21:27:54 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:20.509 21:27:54 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:20.509 21:27:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:20.509 21:27:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:20.509 21:27:54 -- common/autotest_common.sh@10 -- # set +x 00:24:20.509 ************************************ 00:24:20.509 START TEST nvmf_shutdown_tc1 00:24:20.509 ************************************ 00:24:20.509 21:27:54 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:24:20.509 21:27:54 -- target/shutdown.sh@74 -- # starttarget 00:24:20.509 21:27:54 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:20.509 21:27:54 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:20.509 21:27:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.509 21:27:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:20.509 21:27:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:20.509 21:27:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:20.509 21:27:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.509 21:27:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.509 21:27:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.509 21:27:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:20.509 21:27:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:20.509 21:27:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:20.509 21:27:54 -- common/autotest_common.sh@10 -- # set +x 00:24:28.625 21:28:02 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:24:28.625 21:28:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:28.625 21:28:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:28.625 21:28:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:28.625 21:28:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:28.625 21:28:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:28.625 21:28:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:28.625 21:28:02 -- nvmf/common.sh@294 -- # net_devs=() 00:24:28.625 21:28:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:28.625 21:28:02 -- nvmf/common.sh@295 -- # e810=() 00:24:28.625 21:28:02 -- nvmf/common.sh@295 -- # local -ga e810 00:24:28.625 21:28:02 -- nvmf/common.sh@296 -- # x722=() 00:24:28.625 21:28:02 -- nvmf/common.sh@296 -- # local -ga x722 00:24:28.625 21:28:02 -- nvmf/common.sh@297 -- # mlx=() 00:24:28.625 21:28:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:28.625 21:28:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.625 21:28:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:28.625 21:28:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:28.626 21:28:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:28.626 21:28:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:28.626 21:28:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:28.626 21:28:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:28.626 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:28.626 21:28:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:28.626 21:28:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:28.626 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:28.626 21:28:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:28.626 21:28:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:28.626 21:28:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.626 21:28:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.626 21:28:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.626 21:28:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:28.626 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:28.626 21:28:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.626 21:28:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.626 21:28:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.626 21:28:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.626 21:28:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:28.626 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:28.626 21:28:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.626 21:28:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:28.626 21:28:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:28.626 21:28:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:28.626 21:28:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:28.626 21:28:02 -- nvmf/common.sh@57 -- # uname 00:24:28.626 21:28:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:28.626 21:28:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:28.626 21:28:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:28.626 21:28:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:28.626 21:28:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:28.626 21:28:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:28.626 21:28:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:28.626 21:28:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:28.626 21:28:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:28.626 21:28:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:28.626 21:28:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:28.626 21:28:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:28.626 21:28:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:28.626 21:28:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:28.626 21:28:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:28.626 21:28:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:28.626 21:28:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:28.626 21:28:02 -- nvmf/common.sh@104 -- # continue 2 
00:24:28.626 21:28:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:28.626 21:28:02 -- nvmf/common.sh@104 -- # continue 2 00:24:28.626 21:28:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:28.626 21:28:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:28.626 21:28:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:28.626 21:28:02 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:28.626 21:28:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:28.626 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:28.626 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:28.626 altname enp217s0f0np0 00:24:28.626 altname ens818f0np0 00:24:28.626 inet 192.168.100.8/24 scope global mlx_0_0 00:24:28.626 valid_lft forever preferred_lft forever 00:24:28.626 21:28:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:28.626 21:28:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:28.626 21:28:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:28.626 21:28:02 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:28.626 21:28:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:28.626 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:28.626 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:28.626 altname enp217s0f1np1 00:24:28.626 altname ens818f1np1 00:24:28.626 inet 192.168.100.9/24 scope global mlx_0_1 00:24:28.626 valid_lft forever preferred_lft forever 00:24:28.626 21:28:02 -- nvmf/common.sh@410 -- # return 0 00:24:28.626 21:28:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:28.626 21:28:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:28.626 21:28:02 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:28.626 21:28:02 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:28.626 21:28:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:28.626 21:28:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:28.626 21:28:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:28.626 21:28:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:28.626 21:28:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:28.626 21:28:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:28.626 21:28:02 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:24:28.626 21:28:02 -- nvmf/common.sh@104 -- # continue 2 00:24:28.626 21:28:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.626 21:28:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:28.626 21:28:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:28.626 21:28:02 -- nvmf/common.sh@104 -- # continue 2 00:24:28.626 21:28:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:28.626 21:28:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:28.626 21:28:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:28.626 21:28:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:28.626 21:28:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:28.626 21:28:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:28.626 21:28:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:28.626 21:28:02 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:28.626 192.168.100.9' 00:24:28.626 21:28:02 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:28.626 192.168.100.9' 00:24:28.626 21:28:02 -- nvmf/common.sh@445 -- # head -n 1 00:24:28.626 21:28:02 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:28.626 21:28:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:28.626 192.168.100.9' 00:24:28.626 21:28:02 -- nvmf/common.sh@446 -- # tail -n +2 00:24:28.626 21:28:02 -- nvmf/common.sh@446 -- # head -n 1 00:24:28.626 21:28:02 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:28.626 21:28:02 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:28.626 21:28:02 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:28.626 21:28:02 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:28.626 21:28:02 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:28.626 21:28:02 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:28.626 21:28:02 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:28.626 21:28:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:28.626 21:28:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:28.626 21:28:02 -- common/autotest_common.sh@10 -- # set +x 00:24:28.626 21:28:02 -- nvmf/common.sh@469 -- # nvmfpid=1772986 00:24:28.626 21:28:02 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:28.626 21:28:02 -- nvmf/common.sh@470 -- # waitforlisten 1772986 00:24:28.626 21:28:02 -- common/autotest_common.sh@819 -- # '[' -z 1772986 ']' 00:24:28.627 21:28:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.627 21:28:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:28.627 21:28:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:28.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.627 21:28:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:28.627 21:28:02 -- common/autotest_common.sh@10 -- # set +x 00:24:28.627 [2024-07-26 21:28:02.893263] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:28.627 [2024-07-26 21:28:02.893311] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.627 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.627 [2024-07-26 21:28:02.975392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.627 [2024-07-26 21:28:03.011779] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:28.627 [2024-07-26 21:28:03.011888] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.627 [2024-07-26 21:28:03.011898] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.627 [2024-07-26 21:28:03.011906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.627 [2024-07-26 21:28:03.012010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.627 [2024-07-26 21:28:03.012102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.627 [2024-07-26 21:28:03.012214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.627 [2024-07-26 21:28:03.012215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:28.885 21:28:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:28.885 21:28:03 -- common/autotest_common.sh@852 -- # return 0 00:24:28.885 21:28:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:28.885 21:28:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:28.885 21:28:03 -- common/autotest_common.sh@10 -- # set +x 00:24:28.885 21:28:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.885 21:28:03 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:28.885 21:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.885 21:28:03 -- common/autotest_common.sh@10 -- # set +x 00:24:29.143 [2024-07-26 21:28:03.771584] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1261350/0x1265840) succeed. 00:24:29.143 [2024-07-26 21:28:03.782404] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1262940/0x12a6ed0) succeed. 
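At this point nvmfappstart has launched the target with '-i 0 -e 0xFFFF -m 0x1E': core mask 0x1E places reactors on cores 1-4 (matching the four "Reactor started" notices above), and 0xFFFF enables every tracepoint group, which is likely why the RDMA_REQ_RDY_TO_COMPL_PEND name-too-long error appears during trace registration. waitforlisten then blocks until the application answers on /var/tmp/spdk.sock. A rough, simplified equivalent of that start-and-wait sequence (illustrative only; the real helpers add timeouts and error handling):

    # paths match the workspace used in this run
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll the RPC socket until the target is ready to accept commands
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done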
00:24:29.143 21:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.143 21:28:03 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:29.143 21:28:03 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:29.143 21:28:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:29.143 21:28:03 -- common/autotest_common.sh@10 -- # set +x 00:24:29.143 21:28:03 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.143 21:28:03 -- target/shutdown.sh@28 -- # cat 00:24:29.143 21:28:03 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:29.143 21:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.143 21:28:03 -- common/autotest_common.sh@10 -- # set +x 00:24:29.143 Malloc1 00:24:29.143 [2024-07-26 21:28:04.009412] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:29.402 Malloc2 00:24:29.402 Malloc3 00:24:29.402 Malloc4 00:24:29.402 Malloc5 00:24:29.402 Malloc6 00:24:29.402 Malloc7 00:24:29.661 Malloc8 00:24:29.661 Malloc9 00:24:29.661 Malloc10 00:24:29.661 21:28:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.661 21:28:04 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:29.661 21:28:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:29.661 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:24:29.661 21:28:04 -- target/shutdown.sh@78 -- # perfpid=1773308 00:24:29.661 21:28:04 -- target/shutdown.sh@79 -- # waitforlisten 1773308 /var/tmp/bdevperf.sock 00:24:29.661 21:28:04 -- common/autotest_common.sh@819 -- # '[' -z 1773308 ']' 00:24:29.661 21:28:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.661 21:28:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:29.661 21:28:04 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:29.661 21:28:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:29.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.661 21:28:04 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:29.661 21:28:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:29.661 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:24:29.661 21:28:04 -- nvmf/common.sh@520 -- # config=() 00:24:29.661 21:28:04 -- nvmf/common.sh@520 -- # local subsystem config 00:24:29.661 21:28:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.661 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.661 { 00:24:29.661 "params": { 00:24:29.661 "name": "Nvme$subsystem", 00:24:29.661 "trtype": "$TEST_TRANSPORT", 00:24:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.661 "adrfam": "ipv4", 00:24:29.661 "trsvcid": "$NVMF_PORT", 00:24:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.661 "hdgst": ${hdgst:-false}, 00:24:29.661 "ddgst": ${ddgst:-false} 00:24:29.661 }, 00:24:29.661 "method": "bdev_nvme_attach_controller" 00:24:29.661 } 00:24:29.661 EOF 00:24:29.661 )") 00:24:29.661 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.661 21:28:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.661 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.661 { 00:24:29.661 "params": { 00:24:29.661 "name": "Nvme$subsystem", 00:24:29.661 "trtype": "$TEST_TRANSPORT", 00:24:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.661 "adrfam": "ipv4", 00:24:29.661 "trsvcid": "$NVMF_PORT", 00:24:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.661 "hdgst": ${hdgst:-false}, 00:24:29.661 "ddgst": ${ddgst:-false} 00:24:29.661 }, 00:24:29.661 "method": "bdev_nvme_attach_controller" 00:24:29.661 } 00:24:29.661 EOF 00:24:29.661 )") 00:24:29.661 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.661 21:28:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.661 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.661 { 00:24:29.661 "params": { 00:24:29.661 "name": "Nvme$subsystem", 00:24:29.661 "trtype": "$TEST_TRANSPORT", 00:24:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.661 "adrfam": "ipv4", 00:24:29.661 "trsvcid": "$NVMF_PORT", 00:24:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.661 "hdgst": ${hdgst:-false}, 00:24:29.661 "ddgst": ${ddgst:-false} 00:24:29.661 }, 00:24:29.661 "method": "bdev_nvme_attach_controller" 00:24:29.661 } 00:24:29.661 EOF 00:24:29.661 )") 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.662 21:28:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.662 { 00:24:29.662 "params": { 00:24:29.662 "name": "Nvme$subsystem", 00:24:29.662 "trtype": "$TEST_TRANSPORT", 00:24:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.662 "adrfam": "ipv4", 00:24:29.662 "trsvcid": "$NVMF_PORT", 00:24:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.662 "hdgst": ${hdgst:-false}, 00:24:29.662 "ddgst": ${ddgst:-false} 00:24:29.662 }, 00:24:29.662 "method": "bdev_nvme_attach_controller" 00:24:29.662 } 00:24:29.662 EOF 00:24:29.662 )") 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.662 21:28:04 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.662 { 00:24:29.662 "params": { 00:24:29.662 "name": "Nvme$subsystem", 00:24:29.662 "trtype": "$TEST_TRANSPORT", 00:24:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.662 "adrfam": "ipv4", 00:24:29.662 "trsvcid": "$NVMF_PORT", 00:24:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.662 "hdgst": ${hdgst:-false}, 00:24:29.662 "ddgst": ${ddgst:-false} 00:24:29.662 }, 00:24:29.662 "method": "bdev_nvme_attach_controller" 00:24:29.662 } 00:24:29.662 EOF 00:24:29.662 )") 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.662 [2024-07-26 21:28:04.491522] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:29.662 [2024-07-26 21:28:04.491574] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:29.662 21:28:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.662 { 00:24:29.662 "params": { 00:24:29.662 "name": "Nvme$subsystem", 00:24:29.662 "trtype": "$TEST_TRANSPORT", 00:24:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.662 "adrfam": "ipv4", 00:24:29.662 "trsvcid": "$NVMF_PORT", 00:24:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.662 "hdgst": ${hdgst:-false}, 00:24:29.662 "ddgst": ${ddgst:-false} 00:24:29.662 }, 00:24:29.662 "method": "bdev_nvme_attach_controller" 00:24:29.662 } 00:24:29.662 EOF 00:24:29.662 )") 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.662 21:28:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.662 { 00:24:29.662 "params": { 00:24:29.662 "name": "Nvme$subsystem", 00:24:29.662 "trtype": "$TEST_TRANSPORT", 00:24:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.662 "adrfam": "ipv4", 00:24:29.662 "trsvcid": "$NVMF_PORT", 00:24:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.662 "hdgst": ${hdgst:-false}, 00:24:29.662 "ddgst": ${ddgst:-false} 00:24:29.662 }, 00:24:29.662 "method": "bdev_nvme_attach_controller" 00:24:29.662 } 00:24:29.662 EOF 00:24:29.662 )") 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.662 21:28:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.662 { 00:24:29.662 "params": { 00:24:29.662 "name": "Nvme$subsystem", 00:24:29.662 "trtype": "$TEST_TRANSPORT", 00:24:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.662 "adrfam": "ipv4", 00:24:29.662 "trsvcid": "$NVMF_PORT", 00:24:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.662 "hdgst": ${hdgst:-false}, 00:24:29.662 "ddgst": ${ddgst:-false} 00:24:29.662 }, 00:24:29.662 "method": "bdev_nvme_attach_controller" 00:24:29.662 } 00:24:29.662 EOF 00:24:29.662 )") 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.662 21:28:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.662 { 00:24:29.662 "params": { 
00:24:29.662 "name": "Nvme$subsystem", 00:24:29.662 "trtype": "$TEST_TRANSPORT", 00:24:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.662 "adrfam": "ipv4", 00:24:29.662 "trsvcid": "$NVMF_PORT", 00:24:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.662 "hdgst": ${hdgst:-false}, 00:24:29.662 "ddgst": ${ddgst:-false} 00:24:29.662 }, 00:24:29.662 "method": "bdev_nvme_attach_controller" 00:24:29.662 } 00:24:29.662 EOF 00:24:29.662 )") 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.662 21:28:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.662 21:28:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.662 { 00:24:29.662 "params": { 00:24:29.662 "name": "Nvme$subsystem", 00:24:29.662 "trtype": "$TEST_TRANSPORT", 00:24:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.662 "adrfam": "ipv4", 00:24:29.662 "trsvcid": "$NVMF_PORT", 00:24:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.662 "hdgst": ${hdgst:-false}, 00:24:29.662 "ddgst": ${ddgst:-false} 00:24:29.662 }, 00:24:29.662 "method": "bdev_nvme_attach_controller" 00:24:29.662 } 00:24:29.662 EOF 00:24:29.662 )") 00:24:29.921 21:28:04 -- nvmf/common.sh@542 -- # cat 00:24:29.921 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.921 21:28:04 -- nvmf/common.sh@544 -- # jq . 00:24:29.921 21:28:04 -- nvmf/common.sh@545 -- # IFS=, 00:24:29.921 21:28:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme1", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 },{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme2", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 },{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme3", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 },{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme4", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 },{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme5", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 },{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme6", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 },{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme7", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 },{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme8", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 },{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme9", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 },{ 00:24:29.921 "params": { 00:24:29.921 "name": "Nvme10", 00:24:29.921 "trtype": "rdma", 00:24:29.921 "traddr": "192.168.100.8", 00:24:29.921 "adrfam": "ipv4", 00:24:29.921 "trsvcid": "4420", 00:24:29.921 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:29.921 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:29.921 "hdgst": false, 00:24:29.921 "ddgst": false 00:24:29.921 }, 00:24:29.921 "method": "bdev_nvme_attach_controller" 00:24:29.921 }' 00:24:29.921 [2024-07-26 21:28:04.580895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.922 [2024-07-26 21:28:04.617363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.297 21:28:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:31.297 21:28:05 -- common/autotest_common.sh@852 -- # return 0 00:24:31.297 21:28:05 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:31.297 21:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.297 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:24:31.297 21:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.297 21:28:05 -- target/shutdown.sh@83 -- # kill -9 1773308 00:24:31.297 21:28:05 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:31.297 21:28:05 -- target/shutdown.sh@87 -- # sleep 1 00:24:32.233 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1773308 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:32.233 
21:28:06 -- target/shutdown.sh@88 -- # kill -0 1772986 00:24:32.233 21:28:06 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:32.233 21:28:06 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:32.233 21:28:06 -- nvmf/common.sh@520 -- # config=() 00:24:32.233 21:28:06 -- nvmf/common.sh@520 -- # local subsystem config 00:24:32.233 21:28:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.233 21:28:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.233 { 00:24:32.233 "params": { 00:24:32.233 "name": "Nvme$subsystem", 00:24:32.233 "trtype": "$TEST_TRANSPORT", 00:24:32.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.233 "adrfam": "ipv4", 00:24:32.233 "trsvcid": "$NVMF_PORT", 00:24:32.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.233 "hdgst": ${hdgst:-false}, 00:24:32.233 "ddgst": ${ddgst:-false} 00:24:32.233 }, 00:24:32.233 "method": "bdev_nvme_attach_controller" 00:24:32.233 } 00:24:32.233 EOF 00:24:32.233 )") 00:24:32.233 21:28:06 -- nvmf/common.sh@542 -- # cat 00:24:32.233 21:28:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.233 21:28:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.233 { 00:24:32.233 "params": { 00:24:32.233 "name": "Nvme$subsystem", 00:24:32.233 "trtype": "$TEST_TRANSPORT", 00:24:32.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.233 "adrfam": "ipv4", 00:24:32.233 "trsvcid": "$NVMF_PORT", 00:24:32.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.233 "hdgst": ${hdgst:-false}, 00:24:32.233 "ddgst": ${ddgst:-false} 00:24:32.233 }, 00:24:32.233 "method": "bdev_nvme_attach_controller" 00:24:32.233 } 00:24:32.233 EOF 00:24:32.233 )") 00:24:32.233 21:28:06 -- nvmf/common.sh@542 -- # cat 00:24:32.233 21:28:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.233 21:28:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.233 { 00:24:32.233 "params": { 00:24:32.233 "name": "Nvme$subsystem", 00:24:32.233 "trtype": "$TEST_TRANSPORT", 00:24:32.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.233 "adrfam": "ipv4", 00:24:32.233 "trsvcid": "$NVMF_PORT", 00:24:32.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.233 "hdgst": ${hdgst:-false}, 00:24:32.234 "ddgst": ${ddgst:-false} 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 } 00:24:32.234 EOF 00:24:32.234 )") 00:24:32.234 21:28:06 -- nvmf/common.sh@542 -- # cat 00:24:32.234 21:28:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.234 { 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme$subsystem", 00:24:32.234 "trtype": "$TEST_TRANSPORT", 00:24:32.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "$NVMF_PORT", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.234 "hdgst": ${hdgst:-false}, 00:24:32.234 "ddgst": ${ddgst:-false} 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 } 00:24:32.234 EOF 00:24:32.234 )") 00:24:32.234 [2024-07-26 21:28:07.006090] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 
22.11.4 initialization... 00:24:32.234 [2024-07-26 21:28:07.006143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773787 ] 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # cat 00:24:32.234 21:28:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.234 { 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme$subsystem", 00:24:32.234 "trtype": "$TEST_TRANSPORT", 00:24:32.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "$NVMF_PORT", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.234 "hdgst": ${hdgst:-false}, 00:24:32.234 "ddgst": ${ddgst:-false} 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 } 00:24:32.234 EOF 00:24:32.234 )") 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # cat 00:24:32.234 21:28:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.234 { 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme$subsystem", 00:24:32.234 "trtype": "$TEST_TRANSPORT", 00:24:32.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "$NVMF_PORT", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.234 "hdgst": ${hdgst:-false}, 00:24:32.234 "ddgst": ${ddgst:-false} 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 } 00:24:32.234 EOF 00:24:32.234 )") 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # cat 00:24:32.234 21:28:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.234 { 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme$subsystem", 00:24:32.234 "trtype": "$TEST_TRANSPORT", 00:24:32.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "$NVMF_PORT", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.234 "hdgst": ${hdgst:-false}, 00:24:32.234 "ddgst": ${ddgst:-false} 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 } 00:24:32.234 EOF 00:24:32.234 )") 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # cat 00:24:32.234 21:28:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.234 { 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme$subsystem", 00:24:32.234 "trtype": "$TEST_TRANSPORT", 00:24:32.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "$NVMF_PORT", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.234 "hdgst": ${hdgst:-false}, 00:24:32.234 "ddgst": ${ddgst:-false} 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 } 00:24:32.234 EOF 00:24:32.234 )") 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # cat 00:24:32.234 21:28:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.234 { 
00:24:32.234 "params": { 00:24:32.234 "name": "Nvme$subsystem", 00:24:32.234 "trtype": "$TEST_TRANSPORT", 00:24:32.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "$NVMF_PORT", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.234 "hdgst": ${hdgst:-false}, 00:24:32.234 "ddgst": ${ddgst:-false} 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 } 00:24:32.234 EOF 00:24:32.234 )") 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # cat 00:24:32.234 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.234 21:28:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.234 { 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme$subsystem", 00:24:32.234 "trtype": "$TEST_TRANSPORT", 00:24:32.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "$NVMF_PORT", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.234 "hdgst": ${hdgst:-false}, 00:24:32.234 "ddgst": ${ddgst:-false} 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 } 00:24:32.234 EOF 00:24:32.234 )") 00:24:32.234 21:28:07 -- nvmf/common.sh@542 -- # cat 00:24:32.234 21:28:07 -- nvmf/common.sh@544 -- # jq . 00:24:32.234 21:28:07 -- nvmf/common.sh@545 -- # IFS=, 00:24:32.234 21:28:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme1", 00:24:32.234 "trtype": "rdma", 00:24:32.234 "traddr": "192.168.100.8", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "4420", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.234 "hdgst": false, 00:24:32.234 "ddgst": false 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 },{ 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme2", 00:24:32.234 "trtype": "rdma", 00:24:32.234 "traddr": "192.168.100.8", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "4420", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:32.234 "hdgst": false, 00:24:32.234 "ddgst": false 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 },{ 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme3", 00:24:32.234 "trtype": "rdma", 00:24:32.234 "traddr": "192.168.100.8", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "4420", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:32.234 "hdgst": false, 00:24:32.234 "ddgst": false 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 },{ 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme4", 00:24:32.234 "trtype": "rdma", 00:24:32.234 "traddr": "192.168.100.8", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "4420", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:32.234 "hdgst": false, 00:24:32.234 "ddgst": false 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 },{ 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme5", 00:24:32.234 "trtype": "rdma", 00:24:32.234 "traddr": "192.168.100.8", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "4420", 00:24:32.234 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:32.234 "hdgst": false, 00:24:32.234 "ddgst": false 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 },{ 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme6", 00:24:32.234 "trtype": "rdma", 00:24:32.234 "traddr": "192.168.100.8", 00:24:32.234 "adrfam": "ipv4", 00:24:32.234 "trsvcid": "4420", 00:24:32.234 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:32.234 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:32.234 "hdgst": false, 00:24:32.234 "ddgst": false 00:24:32.234 }, 00:24:32.234 "method": "bdev_nvme_attach_controller" 00:24:32.234 },{ 00:24:32.234 "params": { 00:24:32.234 "name": "Nvme7", 00:24:32.234 "trtype": "rdma", 00:24:32.234 "traddr": "192.168.100.8", 00:24:32.234 "adrfam": "ipv4", 00:24:32.235 "trsvcid": "4420", 00:24:32.235 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:32.235 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:32.235 "hdgst": false, 00:24:32.235 "ddgst": false 00:24:32.235 }, 00:24:32.235 "method": "bdev_nvme_attach_controller" 00:24:32.235 },{ 00:24:32.235 "params": { 00:24:32.235 "name": "Nvme8", 00:24:32.235 "trtype": "rdma", 00:24:32.235 "traddr": "192.168.100.8", 00:24:32.235 "adrfam": "ipv4", 00:24:32.235 "trsvcid": "4420", 00:24:32.235 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:32.235 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:32.235 "hdgst": false, 00:24:32.235 "ddgst": false 00:24:32.235 }, 00:24:32.235 "method": "bdev_nvme_attach_controller" 00:24:32.235 },{ 00:24:32.235 "params": { 00:24:32.235 "name": "Nvme9", 00:24:32.235 "trtype": "rdma", 00:24:32.235 "traddr": "192.168.100.8", 00:24:32.235 "adrfam": "ipv4", 00:24:32.235 "trsvcid": "4420", 00:24:32.235 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:32.235 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:32.235 "hdgst": false, 00:24:32.235 "ddgst": false 00:24:32.235 }, 00:24:32.235 "method": "bdev_nvme_attach_controller" 00:24:32.235 },{ 00:24:32.235 "params": { 00:24:32.235 "name": "Nvme10", 00:24:32.235 "trtype": "rdma", 00:24:32.235 "traddr": "192.168.100.8", 00:24:32.235 "adrfam": "ipv4", 00:24:32.235 "trsvcid": "4420", 00:24:32.235 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:32.235 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:32.235 "hdgst": false, 00:24:32.235 "ddgst": false 00:24:32.235 }, 00:24:32.235 "method": "bdev_nvme_attach_controller" 00:24:32.235 }' 00:24:32.235 [2024-07-26 21:28:07.093155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.494 [2024-07-26 21:28:07.130276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.430 Running I/O for 1 seconds... 
00:24:34.367 00:24:34.367 Latency(us) 00:24:34.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.367 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme1n1 : 1.10 718.37 44.90 0.00 0.00 88056.20 7287.60 116601.65 00:24:34.367 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme2n1 : 1.10 735.92 46.00 0.00 0.00 85366.14 7549.75 111568.49 00:24:34.367 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme3n1 : 1.10 751.66 46.98 0.00 0.00 83069.37 7811.89 75078.04 00:24:34.367 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme4n1 : 1.10 750.98 46.94 0.00 0.00 82678.03 7969.18 73819.75 00:24:34.367 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme5n1 : 1.10 750.30 46.89 0.00 0.00 82273.97 8178.89 72142.03 00:24:34.367 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme6n1 : 1.10 749.63 46.85 0.00 0.00 81852.32 8388.61 70883.74 00:24:34.367 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme7n1 : 1.10 748.96 46.81 0.00 0.00 81429.01 8598.32 72142.03 00:24:34.367 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme8n1 : 1.10 748.28 46.77 0.00 0.00 81007.87 8808.04 73819.75 00:24:34.367 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme9n1 : 1.10 747.61 46.73 0.00 0.00 80586.69 9017.75 75497.47 00:24:34.367 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.367 Verification LBA range: start 0x0 length 0x400 00:24:34.367 Nvme10n1 : 1.10 552.33 34.52 0.00 0.00 108252.41 7602.18 327155.71 00:24:34.367 =================================================================================================================== 00:24:34.367 Total : 7254.04 453.38 0.00 0.00 84830.79 7287.60 327155.71 00:24:34.626 21:28:09 -- target/shutdown.sh@93 -- # stoptarget 00:24:34.626 21:28:09 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:34.626 21:28:09 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:34.626 21:28:09 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:34.626 21:28:09 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:34.626 21:28:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:34.626 21:28:09 -- nvmf/common.sh@116 -- # sync 00:24:34.626 21:28:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:34.626 21:28:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:34.626 21:28:09 -- nvmf/common.sh@119 -- # set +e 00:24:34.626 21:28:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:34.626 21:28:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:34.626 rmmod nvme_rdma 00:24:34.626 rmmod 
nvme_fabrics 00:24:34.626 21:28:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:34.626 21:28:09 -- nvmf/common.sh@123 -- # set -e 00:24:34.626 21:28:09 -- nvmf/common.sh@124 -- # return 0 00:24:34.626 21:28:09 -- nvmf/common.sh@477 -- # '[' -n 1772986 ']' 00:24:34.626 21:28:09 -- nvmf/common.sh@478 -- # killprocess 1772986 00:24:34.626 21:28:09 -- common/autotest_common.sh@926 -- # '[' -z 1772986 ']' 00:24:34.626 21:28:09 -- common/autotest_common.sh@930 -- # kill -0 1772986 00:24:34.626 21:28:09 -- common/autotest_common.sh@931 -- # uname 00:24:34.626 21:28:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:34.626 21:28:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1772986 00:24:34.626 21:28:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:34.626 21:28:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:34.626 21:28:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1772986' 00:24:34.626 killing process with pid 1772986 00:24:34.626 21:28:09 -- common/autotest_common.sh@945 -- # kill 1772986 00:24:34.626 21:28:09 -- common/autotest_common.sh@950 -- # wait 1772986 00:24:35.285 21:28:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:35.285 21:28:09 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:35.285 00:24:35.285 real 0m15.332s 00:24:35.285 user 0m33.073s 00:24:35.285 sys 0m7.517s 00:24:35.285 21:28:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.285 21:28:09 -- common/autotest_common.sh@10 -- # set +x 00:24:35.285 ************************************ 00:24:35.285 END TEST nvmf_shutdown_tc1 00:24:35.285 ************************************ 00:24:35.285 21:28:09 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:35.285 21:28:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:35.285 21:28:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:35.285 21:28:09 -- common/autotest_common.sh@10 -- # set +x 00:24:35.285 ************************************ 00:24:35.285 START TEST nvmf_shutdown_tc2 00:24:35.285 ************************************ 00:24:35.285 21:28:09 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:24:35.285 21:28:09 -- target/shutdown.sh@98 -- # starttarget 00:24:35.285 21:28:09 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:35.285 21:28:09 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:35.285 21:28:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.285 21:28:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:35.285 21:28:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:35.285 21:28:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:35.285 21:28:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.285 21:28:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.285 21:28:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.285 21:28:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:35.285 21:28:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:35.285 21:28:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:35.285 21:28:09 -- common/autotest_common.sh@10 -- # set +x 00:24:35.285 21:28:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:35.285 21:28:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:35.285 21:28:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:35.285 21:28:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 
00:24:35.285 21:28:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:35.285 21:28:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:35.285 21:28:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:35.285 21:28:09 -- nvmf/common.sh@294 -- # net_devs=() 00:24:35.285 21:28:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:35.285 21:28:09 -- nvmf/common.sh@295 -- # e810=() 00:24:35.285 21:28:09 -- nvmf/common.sh@295 -- # local -ga e810 00:24:35.285 21:28:09 -- nvmf/common.sh@296 -- # x722=() 00:24:35.285 21:28:09 -- nvmf/common.sh@296 -- # local -ga x722 00:24:35.285 21:28:09 -- nvmf/common.sh@297 -- # mlx=() 00:24:35.285 21:28:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:35.285 21:28:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.285 21:28:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:35.285 21:28:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:35.285 21:28:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:35.285 21:28:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:35.285 21:28:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:35.285 21:28:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:35.285 21:28:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:35.285 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:35.285 21:28:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:35.285 21:28:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:35.285 21:28:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:35.285 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:35.285 21:28:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect 
-i 15' 00:24:35.285 21:28:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:35.285 21:28:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:35.285 21:28:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.285 21:28:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:35.285 21:28:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.285 21:28:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:35.285 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:35.285 21:28:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.285 21:28:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:35.285 21:28:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.285 21:28:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:35.285 21:28:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.285 21:28:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:35.285 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:35.285 21:28:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.285 21:28:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:35.285 21:28:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:35.285 21:28:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:35.285 21:28:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:35.285 21:28:10 -- nvmf/common.sh@57 -- # uname 00:24:35.285 21:28:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:35.285 21:28:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:35.285 21:28:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:35.285 21:28:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:35.285 21:28:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:35.285 21:28:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:35.285 21:28:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:35.285 21:28:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:35.285 21:28:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:35.285 21:28:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:35.285 21:28:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:35.285 21:28:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:35.285 21:28:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:35.285 21:28:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:35.285 21:28:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:35.285 21:28:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:35.285 21:28:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:35.285 21:28:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.285 21:28:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:35.285 21:28:10 -- nvmf/common.sh@104 -- # continue 2 00:24:35.285 21:28:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:35.285 21:28:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.285 21:28:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.285 21:28:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:35.285 21:28:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:35.285 21:28:10 -- nvmf/common.sh@104 -- # continue 2 00:24:35.285 21:28:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:35.285 21:28:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:35.285 21:28:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:35.286 21:28:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:35.286 21:28:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:35.286 21:28:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:35.286 21:28:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:35.286 21:28:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:35.286 21:28:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:35.286 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:35.286 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:35.286 altname enp217s0f0np0 00:24:35.286 altname ens818f0np0 00:24:35.286 inet 192.168.100.8/24 scope global mlx_0_0 00:24:35.286 valid_lft forever preferred_lft forever 00:24:35.286 21:28:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:35.286 21:28:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:35.286 21:28:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:35.286 21:28:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:35.286 21:28:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:35.286 21:28:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:35.545 21:28:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:35.545 21:28:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:35.545 21:28:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:35.545 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:35.545 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:35.545 altname enp217s0f1np1 00:24:35.545 altname ens818f1np1 00:24:35.545 inet 192.168.100.9/24 scope global mlx_0_1 00:24:35.545 valid_lft forever preferred_lft forever 00:24:35.545 21:28:10 -- nvmf/common.sh@410 -- # return 0 00:24:35.545 21:28:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:35.545 21:28:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:35.545 21:28:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:35.545 21:28:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:35.545 21:28:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:35.545 21:28:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:35.545 21:28:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:35.545 21:28:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:35.545 21:28:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:35.545 21:28:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:35.545 21:28:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:35.545 21:28:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.545 21:28:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:35.545 21:28:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:35.545 21:28:10 -- nvmf/common.sh@104 -- # continue 2 00:24:35.545 21:28:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:35.545 21:28:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:24:35.545 21:28:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:35.545 21:28:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:35.545 21:28:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:35.545 21:28:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:35.545 21:28:10 -- nvmf/common.sh@104 -- # continue 2 00:24:35.545 21:28:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:35.545 21:28:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:35.545 21:28:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:35.545 21:28:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:35.545 21:28:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:35.545 21:28:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:35.545 21:28:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:35.545 21:28:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:35.545 21:28:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:35.545 21:28:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:35.545 21:28:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:35.545 21:28:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:35.545 21:28:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:35.545 192.168.100.9' 00:24:35.545 21:28:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:35.545 192.168.100.9' 00:24:35.545 21:28:10 -- nvmf/common.sh@445 -- # head -n 1 00:24:35.545 21:28:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:35.545 21:28:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:35.545 192.168.100.9' 00:24:35.545 21:28:10 -- nvmf/common.sh@446 -- # tail -n +2 00:24:35.545 21:28:10 -- nvmf/common.sh@446 -- # head -n 1 00:24:35.546 21:28:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:35.546 21:28:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:35.546 21:28:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:35.546 21:28:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:35.546 21:28:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:35.546 21:28:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:35.546 21:28:10 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:35.546 21:28:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:35.546 21:28:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:35.546 21:28:10 -- common/autotest_common.sh@10 -- # set +x 00:24:35.546 21:28:10 -- nvmf/common.sh@469 -- # nvmfpid=1774508 00:24:35.546 21:28:10 -- nvmf/common.sh@470 -- # waitforlisten 1774508 00:24:35.546 21:28:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:35.546 21:28:10 -- common/autotest_common.sh@819 -- # '[' -z 1774508 ']' 00:24:35.546 21:28:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.546 21:28:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:35.546 21:28:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:35.546 21:28:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:35.546 21:28:10 -- common/autotest_common.sh@10 -- # set +x 00:24:35.546 [2024-07-26 21:28:10.275236] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:35.546 [2024-07-26 21:28:10.275289] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.546 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.546 [2024-07-26 21:28:10.360958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:35.546 [2024-07-26 21:28:10.399913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:35.546 [2024-07-26 21:28:10.400019] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.546 [2024-07-26 21:28:10.400029] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.546 [2024-07-26 21:28:10.400038] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.546 [2024-07-26 21:28:10.400136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.546 [2024-07-26 21:28:10.400218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.546 [2024-07-26 21:28:10.400326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.546 [2024-07-26 21:28:10.400327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:36.481 21:28:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:36.481 21:28:11 -- common/autotest_common.sh@852 -- # return 0 00:24:36.481 21:28:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:36.481 21:28:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:36.481 21:28:11 -- common/autotest_common.sh@10 -- # set +x 00:24:36.481 21:28:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.481 21:28:11 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:36.481 21:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:36.481 21:28:11 -- common/autotest_common.sh@10 -- # set +x 00:24:36.481 [2024-07-26 21:28:11.148852] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1669350/0x166d840) succeed. 00:24:36.481 [2024-07-26 21:28:11.159368] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x166a940/0x16aeed0) succeed. 
00:24:36.481 21:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:36.481 21:28:11 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:36.481 21:28:11 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:36.481 21:28:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:36.481 21:28:11 -- common/autotest_common.sh@10 -- # set +x 00:24:36.481 21:28:11 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:36.481 21:28:11 -- target/shutdown.sh@28 -- # cat 00:24:36.481 21:28:11 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:36.481 21:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:36.481 21:28:11 -- common/autotest_common.sh@10 -- # set +x 00:24:36.739 Malloc1 00:24:36.739 [2024-07-26 21:28:11.377930] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:36.739 Malloc2 00:24:36.739 Malloc3 00:24:36.739 Malloc4 00:24:36.739 Malloc5 00:24:36.739 Malloc6 00:24:36.998 Malloc7 00:24:36.998 Malloc8 00:24:36.998 Malloc9 00:24:36.998 Malloc10 00:24:36.998 21:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:36.998 21:28:11 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:36.998 21:28:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:36.998 21:28:11 -- common/autotest_common.sh@10 -- # set +x 00:24:36.998 21:28:11 -- target/shutdown.sh@102 -- # perfpid=1774821 00:24:36.998 21:28:11 -- target/shutdown.sh@103 -- # waitforlisten 1774821 /var/tmp/bdevperf.sock 00:24:36.998 21:28:11 -- common/autotest_common.sh@819 -- # '[' -z 1774821 ']' 00:24:36.998 21:28:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.998 21:28:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:36.998 21:28:11 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:36.998 21:28:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:36.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.998 21:28:11 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:36.998 21:28:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:36.998 21:28:11 -- common/autotest_common.sh@10 -- # set +x 00:24:36.998 21:28:11 -- nvmf/common.sh@520 -- # config=() 00:24:36.998 21:28:11 -- nvmf/common.sh@520 -- # local subsystem config 00:24:36.998 21:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.998 { 00:24:36.998 "params": { 00:24:36.998 "name": "Nvme$subsystem", 00:24:36.998 "trtype": "$TEST_TRANSPORT", 00:24:36.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.998 "adrfam": "ipv4", 00:24:36.998 "trsvcid": "$NVMF_PORT", 00:24:36.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.998 "hdgst": ${hdgst:-false}, 00:24:36.998 "ddgst": ${ddgst:-false} 00:24:36.998 }, 00:24:36.998 "method": "bdev_nvme_attach_controller" 00:24:36.998 } 00:24:36.998 EOF 00:24:36.998 )") 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:36.998 21:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.998 { 00:24:36.998 "params": { 00:24:36.998 "name": "Nvme$subsystem", 00:24:36.998 "trtype": "$TEST_TRANSPORT", 00:24:36.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.998 "adrfam": "ipv4", 00:24:36.998 "trsvcid": "$NVMF_PORT", 00:24:36.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.998 "hdgst": ${hdgst:-false}, 00:24:36.998 "ddgst": ${ddgst:-false} 00:24:36.998 }, 00:24:36.998 "method": "bdev_nvme_attach_controller" 00:24:36.998 } 00:24:36.998 EOF 00:24:36.998 )") 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:36.998 21:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.998 { 00:24:36.998 "params": { 00:24:36.998 "name": "Nvme$subsystem", 00:24:36.998 "trtype": "$TEST_TRANSPORT", 00:24:36.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.998 "adrfam": "ipv4", 00:24:36.998 "trsvcid": "$NVMF_PORT", 00:24:36.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.998 "hdgst": ${hdgst:-false}, 00:24:36.998 "ddgst": ${ddgst:-false} 00:24:36.998 }, 00:24:36.998 "method": "bdev_nvme_attach_controller" 00:24:36.998 } 00:24:36.998 EOF 00:24:36.998 )") 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:36.998 21:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.998 { 00:24:36.998 "params": { 00:24:36.998 "name": "Nvme$subsystem", 00:24:36.998 "trtype": "$TEST_TRANSPORT", 00:24:36.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.998 "adrfam": "ipv4", 00:24:36.998 "trsvcid": "$NVMF_PORT", 00:24:36.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.998 "hdgst": ${hdgst:-false}, 00:24:36.998 "ddgst": ${ddgst:-false} 00:24:36.998 }, 00:24:36.998 "method": "bdev_nvme_attach_controller" 00:24:36.998 } 00:24:36.998 EOF 00:24:36.998 )") 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:36.998 21:28:11 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.998 { 00:24:36.998 "params": { 00:24:36.998 "name": "Nvme$subsystem", 00:24:36.998 "trtype": "$TEST_TRANSPORT", 00:24:36.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.998 "adrfam": "ipv4", 00:24:36.998 "trsvcid": "$NVMF_PORT", 00:24:36.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.998 "hdgst": ${hdgst:-false}, 00:24:36.998 "ddgst": ${ddgst:-false} 00:24:36.998 }, 00:24:36.998 "method": "bdev_nvme_attach_controller" 00:24:36.998 } 00:24:36.998 EOF 00:24:36.998 )") 00:24:36.998 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:37.258 21:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:37.258 { 00:24:37.258 "params": { 00:24:37.258 "name": "Nvme$subsystem", 00:24:37.258 "trtype": "$TEST_TRANSPORT", 00:24:37.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.258 "adrfam": "ipv4", 00:24:37.258 "trsvcid": "$NVMF_PORT", 00:24:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.258 "hdgst": ${hdgst:-false}, 00:24:37.258 "ddgst": ${ddgst:-false} 00:24:37.258 }, 00:24:37.258 "method": "bdev_nvme_attach_controller" 00:24:37.258 } 00:24:37.258 EOF 00:24:37.258 )") 00:24:37.258 [2024-07-26 21:28:11.870356] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:37.258 [2024-07-26 21:28:11.870412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774821 ] 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:37.258 21:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:37.258 { 00:24:37.258 "params": { 00:24:37.258 "name": "Nvme$subsystem", 00:24:37.258 "trtype": "$TEST_TRANSPORT", 00:24:37.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.258 "adrfam": "ipv4", 00:24:37.258 "trsvcid": "$NVMF_PORT", 00:24:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.258 "hdgst": ${hdgst:-false}, 00:24:37.258 "ddgst": ${ddgst:-false} 00:24:37.258 }, 00:24:37.258 "method": "bdev_nvme_attach_controller" 00:24:37.258 } 00:24:37.258 EOF 00:24:37.258 )") 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:37.258 21:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:37.258 { 00:24:37.258 "params": { 00:24:37.258 "name": "Nvme$subsystem", 00:24:37.258 "trtype": "$TEST_TRANSPORT", 00:24:37.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.258 "adrfam": "ipv4", 00:24:37.258 "trsvcid": "$NVMF_PORT", 00:24:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.258 "hdgst": ${hdgst:-false}, 00:24:37.258 "ddgst": ${ddgst:-false} 00:24:37.258 }, 00:24:37.258 "method": "bdev_nvme_attach_controller" 00:24:37.258 } 00:24:37.258 EOF 00:24:37.258 )") 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:37.258 21:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:37.258 { 00:24:37.258 
"params": { 00:24:37.258 "name": "Nvme$subsystem", 00:24:37.258 "trtype": "$TEST_TRANSPORT", 00:24:37.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.258 "adrfam": "ipv4", 00:24:37.258 "trsvcid": "$NVMF_PORT", 00:24:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.258 "hdgst": ${hdgst:-false}, 00:24:37.258 "ddgst": ${ddgst:-false} 00:24:37.258 }, 00:24:37.258 "method": "bdev_nvme_attach_controller" 00:24:37.258 } 00:24:37.258 EOF 00:24:37.258 )") 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:37.258 21:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:37.258 { 00:24:37.258 "params": { 00:24:37.258 "name": "Nvme$subsystem", 00:24:37.258 "trtype": "$TEST_TRANSPORT", 00:24:37.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.258 "adrfam": "ipv4", 00:24:37.258 "trsvcid": "$NVMF_PORT", 00:24:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.258 "hdgst": ${hdgst:-false}, 00:24:37.258 "ddgst": ${ddgst:-false} 00:24:37.258 }, 00:24:37.258 "method": "bdev_nvme_attach_controller" 00:24:37.258 } 00:24:37.258 EOF 00:24:37.258 )") 00:24:37.258 21:28:11 -- nvmf/common.sh@542 -- # cat 00:24:37.258 21:28:11 -- nvmf/common.sh@544 -- # jq . 00:24:37.258 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.258 21:28:11 -- nvmf/common.sh@545 -- # IFS=, 00:24:37.258 21:28:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:37.258 "params": { 00:24:37.258 "name": "Nvme1", 00:24:37.258 "trtype": "rdma", 00:24:37.258 "traddr": "192.168.100.8", 00:24:37.258 "adrfam": "ipv4", 00:24:37.258 "trsvcid": "4420", 00:24:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:37.258 "hdgst": false, 00:24:37.258 "ddgst": false 00:24:37.258 }, 00:24:37.258 "method": "bdev_nvme_attach_controller" 00:24:37.258 },{ 00:24:37.258 "params": { 00:24:37.258 "name": "Nvme2", 00:24:37.258 "trtype": "rdma", 00:24:37.258 "traddr": "192.168.100.8", 00:24:37.258 "adrfam": "ipv4", 00:24:37.258 "trsvcid": "4420", 00:24:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:37.258 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:37.258 "hdgst": false, 00:24:37.258 "ddgst": false 00:24:37.258 }, 00:24:37.258 "method": "bdev_nvme_attach_controller" 00:24:37.258 },{ 00:24:37.258 "params": { 00:24:37.258 "name": "Nvme3", 00:24:37.258 "trtype": "rdma", 00:24:37.258 "traddr": "192.168.100.8", 00:24:37.258 "adrfam": "ipv4", 00:24:37.258 "trsvcid": "4420", 00:24:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:37.258 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:37.258 "hdgst": false, 00:24:37.258 "ddgst": false 00:24:37.258 }, 00:24:37.258 "method": "bdev_nvme_attach_controller" 00:24:37.258 },{ 00:24:37.258 "params": { 00:24:37.258 "name": "Nvme4", 00:24:37.258 "trtype": "rdma", 00:24:37.258 "traddr": "192.168.100.8", 00:24:37.258 "adrfam": "ipv4", 00:24:37.258 "trsvcid": "4420", 00:24:37.259 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:37.259 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:37.259 "hdgst": false, 00:24:37.259 "ddgst": false 00:24:37.259 }, 00:24:37.259 "method": "bdev_nvme_attach_controller" 00:24:37.259 },{ 00:24:37.259 "params": { 00:24:37.259 "name": "Nvme5", 00:24:37.259 "trtype": "rdma", 00:24:37.259 "traddr": "192.168.100.8", 00:24:37.259 "adrfam": "ipv4", 00:24:37.259 "trsvcid": "4420", 00:24:37.259 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:37.259 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:37.259 "hdgst": false, 00:24:37.259 "ddgst": false 00:24:37.259 }, 00:24:37.259 "method": "bdev_nvme_attach_controller" 00:24:37.259 },{ 00:24:37.259 "params": { 00:24:37.259 "name": "Nvme6", 00:24:37.259 "trtype": "rdma", 00:24:37.259 "traddr": "192.168.100.8", 00:24:37.259 "adrfam": "ipv4", 00:24:37.259 "trsvcid": "4420", 00:24:37.259 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:37.259 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:37.259 "hdgst": false, 00:24:37.259 "ddgst": false 00:24:37.259 }, 00:24:37.259 "method": "bdev_nvme_attach_controller" 00:24:37.259 },{ 00:24:37.259 "params": { 00:24:37.259 "name": "Nvme7", 00:24:37.259 "trtype": "rdma", 00:24:37.259 "traddr": "192.168.100.8", 00:24:37.259 "adrfam": "ipv4", 00:24:37.259 "trsvcid": "4420", 00:24:37.259 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:37.259 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:37.259 "hdgst": false, 00:24:37.259 "ddgst": false 00:24:37.259 }, 00:24:37.259 "method": "bdev_nvme_attach_controller" 00:24:37.259 },{ 00:24:37.259 "params": { 00:24:37.259 "name": "Nvme8", 00:24:37.259 "trtype": "rdma", 00:24:37.259 "traddr": "192.168.100.8", 00:24:37.259 "adrfam": "ipv4", 00:24:37.259 "trsvcid": "4420", 00:24:37.259 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:37.259 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:37.259 "hdgst": false, 00:24:37.259 "ddgst": false 00:24:37.259 }, 00:24:37.259 "method": "bdev_nvme_attach_controller" 00:24:37.259 },{ 00:24:37.259 "params": { 00:24:37.259 "name": "Nvme9", 00:24:37.259 "trtype": "rdma", 00:24:37.259 "traddr": "192.168.100.8", 00:24:37.259 "adrfam": "ipv4", 00:24:37.259 "trsvcid": "4420", 00:24:37.259 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:37.259 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:37.259 "hdgst": false, 00:24:37.259 "ddgst": false 00:24:37.259 }, 00:24:37.259 "method": "bdev_nvme_attach_controller" 00:24:37.259 },{ 00:24:37.259 "params": { 00:24:37.259 "name": "Nvme10", 00:24:37.259 "trtype": "rdma", 00:24:37.259 "traddr": "192.168.100.8", 00:24:37.259 "adrfam": "ipv4", 00:24:37.259 "trsvcid": "4420", 00:24:37.259 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:37.259 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:37.259 "hdgst": false, 00:24:37.259 "ddgst": false 00:24:37.259 }, 00:24:37.259 "method": "bdev_nvme_attach_controller" 00:24:37.259 }' 00:24:37.259 [2024-07-26 21:28:11.959585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.259 [2024-07-26 21:28:11.996059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.194 Running I/O for 10 seconds... 
00:24:38.762 21:28:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:38.762 21:28:13 -- common/autotest_common.sh@852 -- # return 0 00:24:38.762 21:28:13 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:38.762 21:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:38.762 21:28:13 -- common/autotest_common.sh@10 -- # set +x 00:24:38.762 21:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:38.762 21:28:13 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:38.762 21:28:13 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:38.762 21:28:13 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:38.762 21:28:13 -- target/shutdown.sh@57 -- # local ret=1 00:24:38.762 21:28:13 -- target/shutdown.sh@58 -- # local i 00:24:38.762 21:28:13 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:38.762 21:28:13 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:38.762 21:28:13 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:38.762 21:28:13 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:38.762 21:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:38.762 21:28:13 -- common/autotest_common.sh@10 -- # set +x 00:24:38.762 21:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:38.762 21:28:13 -- target/shutdown.sh@60 -- # read_io_count=446 00:24:38.762 21:28:13 -- target/shutdown.sh@63 -- # '[' 446 -ge 100 ']' 00:24:38.762 21:28:13 -- target/shutdown.sh@64 -- # ret=0 00:24:38.762 21:28:13 -- target/shutdown.sh@65 -- # break 00:24:38.762 21:28:13 -- target/shutdown.sh@69 -- # return 0 00:24:38.762 21:28:13 -- target/shutdown.sh@109 -- # killprocess 1774821 00:24:38.762 21:28:13 -- common/autotest_common.sh@926 -- # '[' -z 1774821 ']' 00:24:38.762 21:28:13 -- common/autotest_common.sh@930 -- # kill -0 1774821 00:24:38.762 21:28:13 -- common/autotest_common.sh@931 -- # uname 00:24:38.762 21:28:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:38.762 21:28:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1774821 00:24:39.021 21:28:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:39.021 21:28:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:39.021 21:28:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1774821' 00:24:39.021 killing process with pid 1774821 00:24:39.021 21:28:13 -- common/autotest_common.sh@945 -- # kill 1774821 00:24:39.021 21:28:13 -- common/autotest_common.sh@950 -- # wait 1774821 00:24:39.021 Received shutdown signal, test time was about 0.887049 seconds 00:24:39.021 00:24:39.021 Latency(us) 00:24:39.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.021 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme1n1 : 0.88 724.03 45.25 0.00 0.00 87065.58 7235.17 120795.96 00:24:39.021 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme2n1 : 0.88 747.06 46.69 0.00 0.00 83714.81 7444.89 74658.61 00:24:39.021 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme3n1 : 0.88 749.60 46.85 0.00 0.00 82744.66 7759.46 71722.60 00:24:39.021 Job: Nvme4n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme4n1 : 0.88 752.10 47.01 0.00 0.00 81897.35 8021.61 70044.88 00:24:39.021 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme5n1 : 0.88 744.38 46.52 0.00 0.00 82128.44 8336.18 69206.02 00:24:39.021 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme6n1 : 0.88 743.51 46.47 0.00 0.00 81633.40 8598.32 70464.31 00:24:39.021 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme7n1 : 0.88 742.64 46.42 0.00 0.00 81102.48 8808.04 71722.60 00:24:39.021 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme8n1 : 0.88 741.76 46.36 0.00 0.00 80560.37 9122.61 73400.32 00:24:39.021 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme9n1 : 0.89 740.90 46.31 0.00 0.00 80051.34 9384.76 75497.47 00:24:39.021 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:39.021 Verification LBA range: start 0x0 length 0x400 00:24:39.021 Nvme10n1 : 0.89 489.70 30.61 0.00 0.00 119775.22 7602.18 335544.32 00:24:39.021 =================================================================================================================== 00:24:39.021 Total : 7175.68 448.48 0.00 0.00 84875.60 7235.17 335544.32 00:24:39.279 21:28:14 -- target/shutdown.sh@112 -- # sleep 1 00:24:40.213 21:28:15 -- target/shutdown.sh@113 -- # kill -0 1774508 00:24:40.213 21:28:15 -- target/shutdown.sh@115 -- # stoptarget 00:24:40.213 21:28:15 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:40.213 21:28:15 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:40.213 21:28:15 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:40.213 21:28:15 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:40.213 21:28:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:40.213 21:28:15 -- nvmf/common.sh@116 -- # sync 00:24:40.213 21:28:15 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:40.213 21:28:15 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:40.213 21:28:15 -- nvmf/common.sh@119 -- # set +e 00:24:40.213 21:28:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:40.213 21:28:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:40.213 rmmod nvme_rdma 00:24:40.213 rmmod nvme_fabrics 00:24:40.213 21:28:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:40.471 21:28:15 -- nvmf/common.sh@123 -- # set -e 00:24:40.471 21:28:15 -- nvmf/common.sh@124 -- # return 0 00:24:40.471 21:28:15 -- nvmf/common.sh@477 -- # '[' -n 1774508 ']' 00:24:40.471 21:28:15 -- nvmf/common.sh@478 -- # killprocess 1774508 00:24:40.471 21:28:15 -- common/autotest_common.sh@926 -- # '[' -z 1774508 ']' 00:24:40.471 21:28:15 -- common/autotest_common.sh@930 -- # kill -0 1774508 00:24:40.471 21:28:15 -- common/autotest_common.sh@931 -- # uname 00:24:40.471 21:28:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:40.471 21:28:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 1774508 00:24:40.471 21:28:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:40.471 21:28:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:40.471 21:28:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1774508' 00:24:40.471 killing process with pid 1774508 00:24:40.471 21:28:15 -- common/autotest_common.sh@945 -- # kill 1774508 00:24:40.471 21:28:15 -- common/autotest_common.sh@950 -- # wait 1774508 00:24:40.730 21:28:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:40.730 21:28:15 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:40.730 00:24:40.730 real 0m5.599s 00:24:40.730 user 0m22.672s 00:24:40.730 sys 0m1.214s 00:24:40.730 21:28:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.730 21:28:15 -- common/autotest_common.sh@10 -- # set +x 00:24:40.730 ************************************ 00:24:40.730 END TEST nvmf_shutdown_tc2 00:24:40.730 ************************************ 00:24:40.989 21:28:15 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:40.989 21:28:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:40.989 21:28:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:40.989 21:28:15 -- common/autotest_common.sh@10 -- # set +x 00:24:40.989 ************************************ 00:24:40.989 START TEST nvmf_shutdown_tc3 00:24:40.989 ************************************ 00:24:40.989 21:28:15 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:24:40.989 21:28:15 -- target/shutdown.sh@120 -- # starttarget 00:24:40.989 21:28:15 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:40.989 21:28:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:40.989 21:28:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.989 21:28:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:40.989 21:28:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:40.989 21:28:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:40.989 21:28:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.989 21:28:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.989 21:28:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.989 21:28:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:40.989 21:28:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:40.989 21:28:15 -- common/autotest_common.sh@10 -- # set +x 00:24:40.989 21:28:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:40.989 21:28:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:40.989 21:28:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:40.989 21:28:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:40.989 21:28:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:40.989 21:28:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:40.989 21:28:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:40.989 21:28:15 -- nvmf/common.sh@294 -- # net_devs=() 00:24:40.989 21:28:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:40.989 21:28:15 -- nvmf/common.sh@295 -- # e810=() 00:24:40.989 21:28:15 -- nvmf/common.sh@295 -- # local -ga e810 00:24:40.989 21:28:15 -- nvmf/common.sh@296 -- # x722=() 00:24:40.989 21:28:15 -- nvmf/common.sh@296 -- # local -ga x722 00:24:40.989 21:28:15 -- nvmf/common.sh@297 -- # mlx=() 00:24:40.989 21:28:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:40.989 21:28:15 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.989 21:28:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:40.989 21:28:15 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:40.989 21:28:15 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:40.989 21:28:15 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:40.989 21:28:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:40.989 21:28:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.989 21:28:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:40.989 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:40.989 21:28:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:40.989 21:28:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.989 21:28:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:40.989 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:40.989 21:28:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:40.989 21:28:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:40.990 21:28:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:40.990 21:28:15 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.990 21:28:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.990 21:28:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.990 21:28:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:40.990 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:40.990 21:28:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.990 
21:28:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.990 21:28:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.990 21:28:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.990 21:28:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:40.990 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:40.990 21:28:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.990 21:28:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:40.990 21:28:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:40.990 21:28:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:40.990 21:28:15 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:40.990 21:28:15 -- nvmf/common.sh@57 -- # uname 00:24:40.990 21:28:15 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:40.990 21:28:15 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:40.990 21:28:15 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:40.990 21:28:15 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:40.990 21:28:15 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:40.990 21:28:15 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:40.990 21:28:15 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:40.990 21:28:15 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:40.990 21:28:15 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:40.990 21:28:15 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:40.990 21:28:15 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:40.990 21:28:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:40.990 21:28:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:40.990 21:28:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:40.990 21:28:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:40.990 21:28:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:40.990 21:28:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:40.990 21:28:15 -- nvmf/common.sh@104 -- # continue 2 00:24:40.990 21:28:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:40.990 21:28:15 -- nvmf/common.sh@104 -- # continue 2 00:24:40.990 21:28:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:40.990 21:28:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:40.990 21:28:15 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # cut -d/ 
-f1 00:24:40.990 21:28:15 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:40.990 21:28:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:40.990 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:40.990 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:40.990 altname enp217s0f0np0 00:24:40.990 altname ens818f0np0 00:24:40.990 inet 192.168.100.8/24 scope global mlx_0_0 00:24:40.990 valid_lft forever preferred_lft forever 00:24:40.990 21:28:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:40.990 21:28:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:40.990 21:28:15 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.990 21:28:15 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:40.990 21:28:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:40.990 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:40.990 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:40.990 altname enp217s0f1np1 00:24:40.990 altname ens818f1np1 00:24:40.990 inet 192.168.100.9/24 scope global mlx_0_1 00:24:40.990 valid_lft forever preferred_lft forever 00:24:40.990 21:28:15 -- nvmf/common.sh@410 -- # return 0 00:24:40.990 21:28:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:40.990 21:28:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:40.990 21:28:15 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:40.990 21:28:15 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:40.990 21:28:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:40.990 21:28:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:40.990 21:28:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:40.990 21:28:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:40.990 21:28:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:40.990 21:28:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:40.990 21:28:15 -- nvmf/common.sh@104 -- # continue 2 00:24:40.990 21:28:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.990 21:28:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:40.990 21:28:15 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:40.990 21:28:15 -- nvmf/common.sh@104 -- # continue 2 00:24:40.990 21:28:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:40.990 21:28:15 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:40.990 21:28:15 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:40.990 21:28:15 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.990 21:28:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:40.990 21:28:15 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:40.990 21:28:15 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.990 21:28:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.990 21:28:15 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:40.990 192.168.100.9' 00:24:40.990 21:28:15 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:40.990 192.168.100.9' 00:24:40.990 21:28:15 -- nvmf/common.sh@445 -- # head -n 1 00:24:40.990 21:28:15 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:40.990 21:28:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:40.990 192.168.100.9' 00:24:40.990 21:28:15 -- nvmf/common.sh@446 -- # tail -n +2 00:24:40.990 21:28:15 -- nvmf/common.sh@446 -- # head -n 1 00:24:40.990 21:28:15 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:40.990 21:28:15 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:40.990 21:28:15 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:40.990 21:28:15 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:40.990 21:28:15 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:40.990 21:28:15 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:41.249 21:28:15 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:41.249 21:28:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:41.249 21:28:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:41.249 21:28:15 -- common/autotest_common.sh@10 -- # set +x 00:24:41.249 21:28:15 -- nvmf/common.sh@469 -- # nvmfpid=1775498 00:24:41.249 21:28:15 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:41.250 21:28:15 -- nvmf/common.sh@470 -- # waitforlisten 1775498 00:24:41.250 21:28:15 -- common/autotest_common.sh@819 -- # '[' -z 1775498 ']' 00:24:41.250 21:28:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.250 21:28:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:41.250 21:28:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.250 21:28:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:41.250 21:28:15 -- common/autotest_common.sh@10 -- # set +x 00:24:41.250 [2024-07-26 21:28:15.921204] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:24:41.250 [2024-07-26 21:28:15.921259] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.250 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.250 [2024-07-26 21:28:16.005288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:41.250 [2024-07-26 21:28:16.041543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:41.250 [2024-07-26 21:28:16.041657] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.250 [2024-07-26 21:28:16.041684] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.250 [2024-07-26 21:28:16.041694] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:41.250 [2024-07-26 21:28:16.041800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.250 [2024-07-26 21:28:16.041887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:41.250 [2024-07-26 21:28:16.041976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.250 [2024-07-26 21:28:16.041977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:42.186 21:28:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:42.186 21:28:16 -- common/autotest_common.sh@852 -- # return 0 00:24:42.186 21:28:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:42.186 21:28:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:42.186 21:28:16 -- common/autotest_common.sh@10 -- # set +x 00:24:42.186 21:28:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.186 21:28:16 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:42.186 21:28:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:42.186 21:28:16 -- common/autotest_common.sh@10 -- # set +x 00:24:42.186 [2024-07-26 21:28:16.795699] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfcb350/0xfcf840) succeed. 00:24:42.186 [2024-07-26 21:28:16.805885] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfcc940/0x1010ed0) succeed. 
00:24:42.186 21:28:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:42.186 21:28:16 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:42.186 21:28:16 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:42.186 21:28:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:42.186 21:28:16 -- common/autotest_common.sh@10 -- # set +x 00:24:42.186 21:28:16 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:42.186 21:28:16 -- target/shutdown.sh@28 -- # cat 00:24:42.186 21:28:16 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:42.186 21:28:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:42.186 21:28:16 -- common/autotest_common.sh@10 -- # set +x 00:24:42.186 Malloc1 00:24:42.186 [2024-07-26 21:28:17.028861] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:42.186 Malloc2 00:24:42.445 Malloc3 00:24:42.445 Malloc4 00:24:42.445 Malloc5 00:24:42.445 Malloc6 00:24:42.445 Malloc7 00:24:42.705 Malloc8 00:24:42.705 Malloc9 00:24:42.705 Malloc10 00:24:42.705 21:28:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:42.705 21:28:17 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:42.705 21:28:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:42.705 21:28:17 -- common/autotest_common.sh@10 -- # set +x 00:24:42.705 21:28:17 -- target/shutdown.sh@124 -- # perfpid=1775820 00:24:42.705 21:28:17 -- target/shutdown.sh@125 -- # waitforlisten 1775820 /var/tmp/bdevperf.sock 00:24:42.705 21:28:17 -- common/autotest_common.sh@819 -- # '[' -z 1775820 ']' 00:24:42.705 21:28:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.705 21:28:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:42.705 21:28:17 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:42.705 21:28:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:42.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.705 21:28:17 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:42.705 21:28:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:42.705 21:28:17 -- common/autotest_common.sh@10 -- # set +x 00:24:42.705 21:28:17 -- nvmf/common.sh@520 -- # config=() 00:24:42.705 21:28:17 -- nvmf/common.sh@520 -- # local subsystem config 00:24:42.705 21:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.705 { 00:24:42.705 "params": { 00:24:42.705 "name": "Nvme$subsystem", 00:24:42.705 "trtype": "$TEST_TRANSPORT", 00:24:42.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.705 "adrfam": "ipv4", 00:24:42.705 "trsvcid": "$NVMF_PORT", 00:24:42.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.705 "hdgst": ${hdgst:-false}, 00:24:42.705 "ddgst": ${ddgst:-false} 00:24:42.705 }, 00:24:42.705 "method": "bdev_nvme_attach_controller" 00:24:42.705 } 00:24:42.705 EOF 00:24:42.705 )") 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.705 21:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.705 { 00:24:42.705 "params": { 00:24:42.705 "name": "Nvme$subsystem", 00:24:42.705 "trtype": "$TEST_TRANSPORT", 00:24:42.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.705 "adrfam": "ipv4", 00:24:42.705 "trsvcid": "$NVMF_PORT", 00:24:42.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.705 "hdgst": ${hdgst:-false}, 00:24:42.705 "ddgst": ${ddgst:-false} 00:24:42.705 }, 00:24:42.705 "method": "bdev_nvme_attach_controller" 00:24:42.705 } 00:24:42.705 EOF 00:24:42.705 )") 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.705 21:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.705 { 00:24:42.705 "params": { 00:24:42.705 "name": "Nvme$subsystem", 00:24:42.705 "trtype": "$TEST_TRANSPORT", 00:24:42.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.705 "adrfam": "ipv4", 00:24:42.705 "trsvcid": "$NVMF_PORT", 00:24:42.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.705 "hdgst": ${hdgst:-false}, 00:24:42.705 "ddgst": ${ddgst:-false} 00:24:42.705 }, 00:24:42.705 "method": "bdev_nvme_attach_controller" 00:24:42.705 } 00:24:42.705 EOF 00:24:42.705 )") 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.705 21:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.705 { 00:24:42.705 "params": { 00:24:42.705 "name": "Nvme$subsystem", 00:24:42.705 "trtype": "$TEST_TRANSPORT", 00:24:42.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.705 "adrfam": "ipv4", 00:24:42.705 "trsvcid": "$NVMF_PORT", 00:24:42.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.705 "hdgst": ${hdgst:-false}, 00:24:42.705 "ddgst": ${ddgst:-false} 00:24:42.705 }, 00:24:42.705 "method": "bdev_nvme_attach_controller" 00:24:42.705 } 00:24:42.705 EOF 00:24:42.705 )") 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.705 21:28:17 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.705 { 00:24:42.705 "params": { 00:24:42.705 "name": "Nvme$subsystem", 00:24:42.705 "trtype": "$TEST_TRANSPORT", 00:24:42.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.705 "adrfam": "ipv4", 00:24:42.705 "trsvcid": "$NVMF_PORT", 00:24:42.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.705 "hdgst": ${hdgst:-false}, 00:24:42.705 "ddgst": ${ddgst:-false} 00:24:42.705 }, 00:24:42.705 "method": "bdev_nvme_attach_controller" 00:24:42.705 } 00:24:42.705 EOF 00:24:42.705 )") 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.705 [2024-07-26 21:28:17.516931] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:42.705 [2024-07-26 21:28:17.516986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775820 ] 00:24:42.705 21:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.705 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.705 { 00:24:42.705 "params": { 00:24:42.705 "name": "Nvme$subsystem", 00:24:42.705 "trtype": "$TEST_TRANSPORT", 00:24:42.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.705 "adrfam": "ipv4", 00:24:42.705 "trsvcid": "$NVMF_PORT", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.706 "hdgst": ${hdgst:-false}, 00:24:42.706 "ddgst": ${ddgst:-false} 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 } 00:24:42.706 EOF 00:24:42.706 )") 00:24:42.706 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.706 21:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.706 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.706 { 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme$subsystem", 00:24:42.706 "trtype": "$TEST_TRANSPORT", 00:24:42.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "$NVMF_PORT", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.706 "hdgst": ${hdgst:-false}, 00:24:42.706 "ddgst": ${ddgst:-false} 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 } 00:24:42.706 EOF 00:24:42.706 )") 00:24:42.706 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.706 21:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.706 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.706 { 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme$subsystem", 00:24:42.706 "trtype": "$TEST_TRANSPORT", 00:24:42.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "$NVMF_PORT", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.706 "hdgst": ${hdgst:-false}, 00:24:42.706 "ddgst": ${ddgst:-false} 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 } 00:24:42.706 EOF 00:24:42.706 )") 00:24:42.706 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.706 21:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.706 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.706 { 00:24:42.706 
"params": { 00:24:42.706 "name": "Nvme$subsystem", 00:24:42.706 "trtype": "$TEST_TRANSPORT", 00:24:42.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "$NVMF_PORT", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.706 "hdgst": ${hdgst:-false}, 00:24:42.706 "ddgst": ${ddgst:-false} 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 } 00:24:42.706 EOF 00:24:42.706 )") 00:24:42.706 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.706 21:28:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.706 21:28:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.706 { 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme$subsystem", 00:24:42.706 "trtype": "$TEST_TRANSPORT", 00:24:42.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "$NVMF_PORT", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.706 "hdgst": ${hdgst:-false}, 00:24:42.706 "ddgst": ${ddgst:-false} 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 } 00:24:42.706 EOF 00:24:42.706 )") 00:24:42.706 21:28:17 -- nvmf/common.sh@542 -- # cat 00:24:42.706 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.706 21:28:17 -- nvmf/common.sh@544 -- # jq . 00:24:42.706 21:28:17 -- nvmf/common.sh@545 -- # IFS=, 00:24:42.706 21:28:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme1", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 },{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme2", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 },{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme3", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 },{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme4", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 },{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme5", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 },{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme6", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 },{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme7", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 },{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme8", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 },{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme9", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 },{ 00:24:42.706 "params": { 00:24:42.706 "name": "Nvme10", 00:24:42.706 "trtype": "rdma", 00:24:42.706 "traddr": "192.168.100.8", 00:24:42.706 "adrfam": "ipv4", 00:24:42.706 "trsvcid": "4420", 00:24:42.706 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:42.706 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:42.706 "hdgst": false, 00:24:42.706 "ddgst": false 00:24:42.706 }, 00:24:42.706 "method": "bdev_nvme_attach_controller" 00:24:42.706 }' 00:24:42.966 [2024-07-26 21:28:17.604576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.966 [2024-07-26 21:28:17.641279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.904 Running I/O for 10 seconds... 
00:24:44.472 21:28:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:44.472 21:28:19 -- common/autotest_common.sh@852 -- # return 0 00:24:44.472 21:28:19 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:44.472 21:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.472 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:24:44.472 21:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.472 21:28:19 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.472 21:28:19 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:44.472 21:28:19 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:44.472 21:28:19 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:44.472 21:28:19 -- target/shutdown.sh@57 -- # local ret=1 00:24:44.472 21:28:19 -- target/shutdown.sh@58 -- # local i 00:24:44.472 21:28:19 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:44.473 21:28:19 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:44.473 21:28:19 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:44.473 21:28:19 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:44.473 21:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.473 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:24:44.473 21:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.473 21:28:19 -- target/shutdown.sh@60 -- # read_io_count=491 00:24:44.473 21:28:19 -- target/shutdown.sh@63 -- # '[' 491 -ge 100 ']' 00:24:44.473 21:28:19 -- target/shutdown.sh@64 -- # ret=0 00:24:44.473 21:28:19 -- target/shutdown.sh@65 -- # break 00:24:44.473 21:28:19 -- target/shutdown.sh@69 -- # return 0 00:24:44.473 21:28:19 -- target/shutdown.sh@134 -- # killprocess 1775498 00:24:44.473 21:28:19 -- common/autotest_common.sh@926 -- # '[' -z 1775498 ']' 00:24:44.473 21:28:19 -- common/autotest_common.sh@930 -- # kill -0 1775498 00:24:44.473 21:28:19 -- common/autotest_common.sh@931 -- # uname 00:24:44.473 21:28:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:44.473 21:28:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1775498 00:24:44.732 21:28:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:44.732 21:28:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:44.732 21:28:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1775498' 00:24:44.732 killing process with pid 1775498 00:24:44.732 21:28:19 -- common/autotest_common.sh@945 -- # kill 1775498 00:24:44.732 21:28:19 -- common/autotest_common.sh@950 -- # wait 1775498 00:24:44.991 21:28:19 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:44.991 21:28:19 -- target/shutdown.sh@138 -- # sleep 1 00:24:45.943 [2024-07-26 21:28:20.431645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x183e00 00:24:45.943 [2024-07-26 21:28:20.431693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 
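The block above is the core of the shutdown test: once framework_wait_init confirms bdevperf is initialized, waitforio polls bdev_get_iostat on /var/tmp/bdevperf.sock until Nvme1n1 has completed at least 100 reads (491 here), and only then is the nvmf target process (pid 1775498, reactor_1) killed so the controllers disappear while I/O is still in flight. A rough reconstruction of that polling helper follows; the retry count, threshold and jq filter come from the trace, while calling scripts/rpc.py directly (instead of the rpc_cmd wrapper) and the sleep interval are assumptions.

# Sketch of the waitforio check seen above (real helper: target/shutdown.sh).
waitforio() {
    local rpc_sock=$1 bdev=$2
    local i read_io_count ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0  # enough reads completed; safe to shut the target down
            break
        fi
        sleep 0.25  # assumed interval; the trace does not show the delay
    done
    return $ret
}

# Usage mirroring the trace: wait for I/O on Nvme1n1, then kill the nvmf
# target (pid 1775498 in this run) so bdevperf sees the controllers vanish.
waitforio /var/tmp/bdevperf.sock Nvme1n1 && kill "$nvmfpid"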
21:28:20.431728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.431749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 21:28:20.431770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.431792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.431812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 21:28:20.431833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.431853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x181d00 00:24:45.943 [2024-07-26 21:28:20.431875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003ebef40 len:0x10000 key:0x183400 00:24:45.943 [2024-07-26 21:28:20.431895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 
21:28:20.431916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 21:28:20.431938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019270e80 len:0x10000 key:0x182900 00:24:45.943 [2024-07-26 21:28:20.431962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e3eb40 len:0x10000 key:0x183400 00:24:45.943 [2024-07-26 21:28:20.431983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.431995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 21:28:20.432004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 21:28:20.432049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x183e00 00:24:45.943 [2024-07-26 21:28:20.432070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e9ee40 len:0x10000 key:0x183400 00:24:45.943 [2024-07-26 21:28:20.432091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 
21:28:20.432114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x181d00 00:24:45.943 [2024-07-26 21:28:20.432137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e7ed40 len:0x10000 key:0x183400 00:24:45.943 [2024-07-26 21:28:20.432158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e6ecc0 len:0x10000 key:0x183400 00:24:45.943 [2024-07-26 21:28:20.432178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019260e00 len:0x10000 key:0x182900 00:24:45.943 [2024-07-26 21:28:20.432220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 21:28:20.432240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x181d00 00:24:45.943 [2024-07-26 
21:28:20.432301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x181d00 00:24:45.943 [2024-07-26 21:28:20.432321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x181d00 00:24:45.943 [2024-07-26 21:28:20.432361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003eaeec0 len:0x10000 key:0x183400 00:24:45.943 [2024-07-26 21:28:20.432380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003edf040 len:0x10000 key:0x183400 00:24:45.943 [2024-07-26 21:28:20.432402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e5ec40 len:0x10000 key:0x183400 00:24:45.943 [2024-07-26 21:28:20.432435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003eef0c0 len:0x10000 key:0x183400 00:24:45.943 [2024-07-26 21:28:20.432475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x181d00 00:24:45.943 [2024-07-26 
21:28:20.432495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 21:28:20.432514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 21:28:20.432554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x181900 00:24:45.943 [2024-07-26 21:28:20.432593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x181500 00:24:45.943 [2024-07-26 21:28:20.432637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x181d00 00:24:45.943 [2024-07-26 21:28:20.432658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x181d00 00:24:45.943 [2024-07-26 
21:28:20.432678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a7b000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b1c000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011973000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011994000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f1b000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 
21:28:20.432862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012efa000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.943 [2024-07-26 21:28:20.432893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013065000 len:0x10000 key:0x184300 00:24:45.943 [2024-07-26 21:28:20.432901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.432911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013044000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.432920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.432930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013023000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.432938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.432948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013002000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.432957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.432968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fe1000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.432977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.432987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fc0000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.432995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.433005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd69000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.433030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bc66 p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.435609] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019283280 was disconnected and freed. reset controller. 
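The dump above is the host-side fallout of that kill: every command still queued on the RDMA qpair is completed with ABORTED - SQ DELETION status, each record listing the opcode (READ/WRITE), sqid/cid, LBA and SGL keyed data block of the aborted I/O, until bdev_nvme frees the qpair and schedules a controller reset. When triaging dumps like this it can help to reduce them to per-queue counts; a small illustrative filter is shown below (build.log is a placeholder for the captured console output, and the field layout follows the nvme_qpair.c print format seen in this run).

# Count aborted completions per qid in a dump like the one above.
grep -oE 'ABORTED - SQ DELETION \([0-9a-f/]+\) qid:[0-9]+' build.log |
    awk '{print $NF}' | sort | uniq -c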
00:24:45.944 [2024-07-26 21:28:20.435702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x184200 00:24:45.944 [2024-07-26 21:28:20.435739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.435783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000714fa80 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.435816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.435853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.435885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.435928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000247340 len:0x10000 key:0x184200 00:24:45.944 [2024-07-26 21:28:20.435960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.435997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000705f300 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b25f900 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1ea40 len:0x10000 key:0x183400 00:24:45.944 [2024-07-26 21:28:20.436143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000700f080 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ff800 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 
00:24:45.944 [2024-07-26 21:28:20.436194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000009df300 len:0x10000 key:0x183700 00:24:45.944 [2024-07-26 21:28:20.436224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000703f200 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x184200 00:24:45.944 [2024-07-26 21:28:20.436263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x184200 00:24:45.944 [2024-07-26 21:28:20.436284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071cfe80 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070af580 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 
00:24:45.944 [2024-07-26 21:28:20.436374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071dff00 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b24f880 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070bf600 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000706f380 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b20f680 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000712f980 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002a7640 len:0x10000 key:0x184200 00:24:45.944 [2024-07-26 21:28:20.436533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000702f180 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 
00:24:45.944 [2024-07-26 21:28:20.436562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x184200 00:24:45.944 [2024-07-26 21:28:20.436571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002171c0 len:0x10000 key:0x184200 00:24:45.944 [2024-07-26 21:28:20.436590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000708f480 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000707f400 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b28fa80 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b22f780 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ef780 len:0x10000 key:0x183b00 00:24:45.944 [2024-07-26 21:28:20.436688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b23f800 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000267440 len:0x10000 key:0x184200 00:24:45.944 [2024-07-26 21:28:20.436727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 
00:24:45.944 [2024-07-26 21:28:20.436737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e95e000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e97f000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001314c000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001312b000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001310a000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130e9000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130c8000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 
00:24:45.944 [2024-07-26 21:28:20.436910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013233000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013212000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131f1000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131d0000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.436980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.436991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c24f000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.437000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.437010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.437019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.437029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c20d000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.437038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.437049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.437058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 00:24:45.944 [2024-07-26 21:28:20.437068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x184300 00:24:45.944 [2024-07-26 21:28:20.437077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 
[... repeated nvme_qpair.c *NOTICE* pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) for outstanding READ commands on qid:1, lba 82176 through 83840, key:0x184300, each completed as ABORTED - SQ DELETION (00/08) cid:25740 cdw0:d0eac000 sqhd:4e7e p:1 m:0 dnr:0 ...]
00:24:45.945 [2024-07-26 21:28:20.439662] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019283040 was disconnected and freed. reset controller.
[... repeated nvme_qpair.c *NOTICE* pairs for outstanding READ and WRITE commands on qid:1, lba 78080 through 88448, keys 0x182a00/0x183700/0x183a00/0x184300, each completed as ABORTED - SQ DELETION (00/08) cid:25740 cdw0:d0eac000 sqhd:5b3e p:1 m:0 dnr:0 ...]
00:24:45.946 [2024-07-26 21:28:20.451792] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257880 was disconnected and freed. reset controller.
[... repeated nvme_qpair.c *NOTICE* pairs for outstanding READ and WRITE commands on qid:1, lba 78208 through 88832, keys 0x182a00-0x182d00 and 0x184300, each completed as ABORTED - SQ DELETION (00/08) cid:25740 cdw0:d0eac000 sqhd:20d8 p:1 m:0 dnr:0 ...]
00:24:45.947 [2024-07-26 21:28:20.456584] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257640 was disconnected and freed. reset controller.
[... repeated nvme_qpair.c *NOTICE* pairs for outstanding READ and WRITE commands on qid:1, lba 78080 through 88448, keys 0x182d00-0x182f00 and 0x184300, each completed as ABORTED - SQ DELETION (00/08) cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 ...]
00:24:45.947 [2024-07-26 21:28:20.458564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.947 [2024-07-26 21:28:20.458579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x184300 00:24:45.947 [2024-07-26 21:28:20.458593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ca000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011df6000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011dd5000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011db4000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c081000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c060000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d49d000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.458966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d47c000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.458979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:9fbe p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.460992] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257400 was disconnected and freed. reset controller. 
00:24:45.948 [2024-07-26 21:28:20.461019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f200 len:0x10000 key:0x182f00 00:24:45.948 [2024-07-26 21:28:20.461034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183100 00:24:45.948 [2024-07-26 21:28:20.461157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:24:45.948 [2024-07-26 21:28:20.461186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183000 00:24:45.948 [2024-07-26 21:28:20.461216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a55fb80 len:0x10000 key:0x183100 00:24:45.948 [2024-07-26 21:28:20.461244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 
00:24:45.948 [2024-07-26 21:28:20.461288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183000 00:24:45.948 [2024-07-26 21:28:20.461302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a53fa80 len:0x10000 key:0x183100 00:24:45.948 [2024-07-26 21:28:20.461332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183100 00:24:45.948 [2024-07-26 21:28:20.461361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afe00 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:24:45.948 [2024-07-26 21:28:20.461539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 
00:24:45.948 [2024-07-26 21:28:20.461554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:24:45.948 [2024-07-26 21:28:20.461596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:24:45.948 [2024-07-26 21:28:20.461630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183100 00:24:45.948 [2024-07-26 21:28:20.461688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:24:45.948 [2024-07-26 21:28:20.461719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:24:45.948 [2024-07-26 21:28:20.461778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x183100 00:24:45.948 [2024-07-26 21:28:20.461807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 
00:24:45.948 [2024-07-26 21:28:20.461823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.461867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183100 00:24:45.948 [2024-07-26 21:28:20.461896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x183100 00:24:45.948 [2024-07-26 21:28:20.461925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183000 00:24:45.948 [2024-07-26 21:28:20.461954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:24:45.948 [2024-07-26 21:28:20.461984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.461999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.462013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.462028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183000 00:24:45.948 [2024-07-26 21:28:20.462044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.462060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3bfe80 len:0x10000 key:0x183300 00:24:45.948 [2024-07-26 21:28:20.462074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 
00:24:45.948 [2024-07-26 21:28:20.462089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.462104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.462119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.462133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.462149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001379d000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.462162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.462177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001377c000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.462190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.462206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001375b000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.462219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.948 [2024-07-26 21:28:20.462234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001373a000 len:0x10000 key:0x184300 00:24:45.948 [2024-07-26 21:28:20.462247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4a5000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b484000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b463000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 
00:24:45.949 [2024-07-26 21:28:20.462349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b400000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c83d000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81c000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f61000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f40000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 
00:24:45.949 [2024-07-26 21:28:20.462611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131af000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f4000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d3000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b2000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c291000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c270000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ef000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.462849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ce000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 
00:24:45.949 [2024-07-26 21:28:20.462880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ad000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.462893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:dcde p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465333] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192571c0 was disconnected and freed. reset controller. 00:24:45.949 [2024-07-26 21:28:20.465390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.465424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.465498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.465567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.465652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.465722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.465791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183100 00:24:45.949 [2024-07-26 21:28:20.465861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x184000 00:24:45.949 
[2024-07-26 21:28:20.465945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.465983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.465999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x184000 00:24:45.949 
[2024-07-26 21:28:20.466222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74fb00 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7dff80 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183d00 00:24:45.949 
[2024-07-26 21:28:20.466492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x184000 00:24:45.949 [2024-07-26 21:28:20.466735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183d00 00:24:45.949 
[2024-07-26 21:28:20.466763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x183d00 00:24:45.949 [2024-07-26 21:28:20.466793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.466822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.466854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b56b000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.466883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.949 [2024-07-26 21:28:20.466900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b54a000 len:0x10000 key:0x184300 00:24:45.949 [2024-07-26 21:28:20.466913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.466929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6b5000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.466943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.466958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b694000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.466972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.466988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b673000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x184300 00:24:45.950 
[2024-07-26 21:28:20.467031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b610000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121b3000 len:0x10000 key:0x184300 00:24:45.950 
[2024-07-26 21:28:20.467301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012192000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133bf000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001339e000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c336000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbd9000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbb8000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb97000 len:0x10000 key:0x184300 00:24:45.950 
[2024-07-26 21:28:20.467567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb76000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.467612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb55000 len:0x10000 key:0x184300 00:24:45.950 [2024-07-26 21:28:20.467655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:11ac p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.469897] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256f80 was disconnected and freed. reset controller. 00:24:45.950 [2024-07-26 21:28:20.469953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.469986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.470269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x183600 00:24:45.950 [2024-07-26 21:28:20.470339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183600 00:24:45.950 [2024-07-26 21:28:20.470415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af0f900 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.470495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183600 00:24:45.950 [2024-07-26 21:28:20.470552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeef800 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.470580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.470609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.470767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183600 00:24:45.950 [2024-07-26 21:28:20.470799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183600 00:24:45.950 [2024-07-26 21:28:20.470857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aabfc80 len:0x10000 key:0x183600 00:24:45.950 [2024-07-26 21:28:20.470884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.470913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.470941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.470969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.470985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.471002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.471018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.471032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.471047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.471061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.471076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.471091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.471111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183900 00:24:45.950 [2024-07-26 21:28:20.471125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.471141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.471155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.471171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183500 00:24:45.950 [2024-07-26 21:28:20.471184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.471201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x183600 00:24:45.950 [2024-07-26 21:28:20.471214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.471230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183600 00:24:45.950 [2024-07-26 21:28:20.471243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.950 [2024-07-26 21:28:20.471259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183500 00:24:45.951 [2024-07-26 21:28:20.471273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183600 00:24:45.951 [2024-07-26 21:28:20.471302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183900 00:24:45.951 [2024-07-26 21:28:20.471331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b77b000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b75a000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0bb000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e09a000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e079000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e058000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e037000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e016000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff5000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd4000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfb3000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b841000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123c3000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123a2000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012381000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012360000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135cf000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.471975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135ae000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.471989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.472005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c546000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.472018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.472035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.472048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.472064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.472077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.472093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cdc8000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.472108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.472124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.472138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.472153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd86000 len:0x10000 key:0x184300 00:24:45.951 [2024-07-26 21:28:20.472166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:b7e4 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.474609] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256d40 was disconnected and freed. reset controller. 00:24:45.951 [2024-07-26 21:28:20.474677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeaf600 len:0x10000 key:0x183500 00:24:45.951 [2024-07-26 21:28:20.474710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.474752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.474785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.474823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.474854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.474891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae2f200 len:0x10000 key:0x183500 00:24:45.951 [2024-07-26 21:28:20.474923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.474961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f800 len:0x10000 key:0x183c00 00:24:45.951 [2024-07-26 21:28:20.475068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0dfd80 len:0x10000 key:0x183c00 00:24:45.951 [2024-07-26 21:28:20.475098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5dff80 len:0x10000 key:0x183f00 00:24:45.951 [2024-07-26 21:28:20.475128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f900 len:0x10000 key:0x183c00 00:24:45.951 [2024-07-26 21:28:20.475190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5bfe80 len:0x10000 key:0x183f00 00:24:45.951 [2024-07-26 21:28:20.475218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae6f400 len:0x10000 key:0x183500 00:24:45.951 [2024-07-26 21:28:20.475276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae3f280 len:0x10000 key:0x183500 00:24:45.951 [2024-07-26 21:28:20.475306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09fb80 len:0x10000 key:0x183c00 00:24:45.951 [2024-07-26 21:28:20.475423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f780 len:0x10000 key:0x183c00 00:24:45.951 [2024-07-26 21:28:20.475484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0afc00 len:0x10000 key:0x183c00 00:24:45.951 [2024-07-26 21:28:20.475513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aebf680 len:0x10000 key:0x183500 00:24:45.951 [2024-07-26 21:28:20.475600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae8f500 len:0x10000 key:0x183500 00:24:45.951 [2024-07-26 21:28:20.475676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae5f380 len:0x10000 key:0x183500 00:24:45.951 [2024-07-26 21:28:20.475749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183800 00:24:45.951 [2024-07-26 21:28:20.475835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f980 len:0x10000 key:0x183c00 00:24:45.951 [2024-07-26 21:28:20.475863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.951 [2024-07-26 21:28:20.475878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aedf780 len:0x10000 key:0x183500 00:24:45.952 [2024-07-26 21:28:20.475892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.475907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183800 00:24:45.952 [2024-07-26 21:28:20.475920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.475935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f700 len:0x10000 key:0x183c00 00:24:45.952 [2024-07-26 21:28:20.475948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.475963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae4f300 len:0x10000 key:0x183500 00:24:45.952 [2024-07-26 21:28:20.475976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.475991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ce000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b883000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b862000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e436000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e457000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e478000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cac000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c8b000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c6a000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab4000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba93000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba72000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba51000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba30000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126ba000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c525000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c504000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4e3000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4c2000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4a1000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c480000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8ff000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8de000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8bd000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d89c000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.476771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d01a000 len:0x10000 key:0x184300 00:24:45.952 [2024-07-26 21:28:20.476784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:7860 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479042] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256b00 was disconnected and freed. reset controller. 
00:24:45.952 [2024-07-26 21:28:20.479069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183f00 00:24:45.952 [2024-07-26 21:28:20.479257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 
00:24:45.952 [2024-07-26 21:28:20.479330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183f00 00:24:45.952 [2024-07-26 21:28:20.479430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183f00 00:24:45.952 [2024-07-26 21:28:20.479459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 
00:24:45.952 [2024-07-26 21:28:20.479587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 
00:24:45.952 [2024-07-26 21:28:20.479864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.479907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183f00 00:24:45.952 [2024-07-26 21:28:20.479963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.479979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.479991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.480007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8af600 len:0x10000 key:0x184400 00:24:45.952 [2024-07-26 21:28:20.480022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.480038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183200 00:24:45.952 [2024-07-26 21:28:20.480052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.480067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183f00 00:24:45.952 [2024-07-26 21:28:20.480080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.952 [2024-07-26 21:28:20.480096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 
00:24:45.953 [2024-07-26 21:28:20.480125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184400 00:24:45.953 [2024-07-26 21:28:20.480252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183f00 00:24:45.953 [2024-07-26 21:28:20.480280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 
00:24:45.953 [2024-07-26 21:28:20.480382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183200 00:24:45.953 [2024-07-26 21:28:20.480452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184400 00:24:45.953 [2024-07-26 21:28:20.480481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d4d000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 
00:24:45.953 [2024-07-26 21:28:20.480643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d6e000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d8f000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db93000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbb4000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbd5000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c03f000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c01e000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca3000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.480875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc82000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 
[2024-07-26 21:28:20.480904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc61000 len:0x10000 key:0x184300 00:24:45.953 [2024-07-26 21:28:20.480917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25740 cdw0:d0eac000 sqhd:bf10 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.483483] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192568c0 was disconnected and freed. reset controller. 00:24:45.953 [2024-07-26 21:28:20.483551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.483567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:4ccc p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.483582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.483595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:4ccc p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.483613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.483661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:4ccc p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.483676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.483689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:4ccc p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.485659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.953 [2024-07-26 21:28:20.485702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:45.953 [2024-07-26 21:28:20.485734] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.953 [2024-07-26 21:28:20.485785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.485818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:3ee4 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.485854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.485885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:3ee4 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.485918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.485951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:3ee4 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.485984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.486014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:3ee4 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.488455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.953 [2024-07-26 21:28:20.488499] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:45.953 [2024-07-26 21:28:20.488529] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.953 [2024-07-26 21:28:20.488575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.488608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:620e p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.488659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.488692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:620e p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.488726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.488757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:620e p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.488790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.488822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:620e p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.491215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.953 [2024-07-26 21:28:20.491233] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:45.953 [2024-07-26 21:28:20.491245] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.953 [2024-07-26 21:28:20.491263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.491277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:cdb2 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.491290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.491303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:cdb2 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.491316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.491330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:cdb2 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.491343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.491355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:cdb2 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.493416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.953 [2024-07-26 21:28:20.493457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:45.953 [2024-07-26 21:28:20.493488] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.953 [2024-07-26 21:28:20.493531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.493565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:069a p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.493598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.493640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:069a p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.493674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.493705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:069a p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.493739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.493771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:069a p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.496057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.953 [2024-07-26 21:28:20.496100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:45.953 [2024-07-26 21:28:20.496112] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.953 [2024-07-26 21:28:20.496130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.496146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:05b8 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.496160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.496173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:05b8 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.496186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.953 [2024-07-26 21:28:20.496199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:05b8 p:1 m:0 dnr:0 00:24:45.953 [2024-07-26 21:28:20.496213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.496225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:05b8 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.498464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.954 [2024-07-26 21:28:20.498504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.954 [2024-07-26 21:28:20.498534] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.954 [2024-07-26 21:28:20.498577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.498610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:0d50 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.498655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.498687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:0d50 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.498719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.498751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:0d50 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.498784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.498815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:0d50 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.501027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.954 [2024-07-26 21:28:20.501067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:45.954 [2024-07-26 21:28:20.501097] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.954 [2024-07-26 21:28:20.501140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.501172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:8eca p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.501205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.501235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:8eca p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.501268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.501306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:8eca p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.501339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.501371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:8eca p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.503410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.954 [2024-07-26 21:28:20.503449] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:45.954 [2024-07-26 21:28:20.503479] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.954 [2024-07-26 21:28:20.503523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.503555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:3796 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.503588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.503619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:3796 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.503664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.503696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:3796 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.503728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.503760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:3796 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.505767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.954 [2024-07-26 21:28:20.505807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:45.954 [2024-07-26 21:28:20.505836] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.954 [2024-07-26 21:28:20.505881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.505914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:f5d8 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.505947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.505977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:f5d8 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.506010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.506041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:f5d8 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.506074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.954 [2024-07-26 21:28:20.506105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25740 cdw0:0 sqhd:f5d8 p:1 m:0 dnr:0 00:24:45.954 [2024-07-26 21:28:20.524668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.954 [2024-07-26 21:28:20.524690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:45.954 [2024-07-26 21:28:20.524704] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:45.954 [2024-07-26 21:28:20.533536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.954 [2024-07-26 21:28:20.533565] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:45.954 [2024-07-26 21:28:20.533575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:45.954 [2024-07-26 21:28:20.533619] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:45.954 [2024-07-26 21:28:20.533637] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:45.954 [2024-07-26 21:28:20.533649] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:45.954 [2024-07-26 21:28:20.533681] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:45.954 [2024-07-26 21:28:20.533693] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:45.954 [2024-07-26 21:28:20.533706] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:45.954 [2024-07-26 21:28:20.533718] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.954 [2024-07-26 21:28:20.533803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:45.954 [2024-07-26 21:28:20.533815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:45.954 [2024-07-26 21:28:20.533825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:45.954 [2024-07-26 21:28:20.533839] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:45.954 [2024-07-26 21:28:20.535902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:45.954 task offset: 81280 on job bdev=Nvme1n1 fails
00:24:45.954
00:24:45.954 Latency(us)
00:24:45.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:45.954 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme1n1 ended in about 2.03 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme1n1 : 2.03 304.54 19.03 31.54 0.00 189775.50 42572.19 1167694.23
00:24:45.954 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme2n1 ended in about 2.03 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme2n1 : 2.03 309.33 19.33 31.52 0.00 186361.35 41523.61 1160983.35
00:24:45.954 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme3n1 ended in about 2.03 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme3n1 : 2.03 309.20 19.33 31.51 0.00 185790.70 42362.47 1154272.46
00:24:45.954 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme4n1 ended in about 2.03 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme4n1 : 2.03 310.55 19.41 31.50 0.00 184479.22 20132.66 1147561.57
00:24:45.954 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme5n1 ended in about 2.03 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme5n1 : 2.03 308.93 19.31 31.48 0.00 184990.17 44879.05 1140850.69
00:24:45.954 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme6n1 ended in about 2.03 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme6n1 : 2.03 308.80 19.30 31.47 0.00 184401.48 45717.91 1140850.69
00:24:45.954 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme7n1 ended in about 2.03 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme7n1 : 2.03 308.66 19.29 31.46 0.00 183839.31 46556.77 1134139.80
00:24:45.954 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme8n1 ended in about 2.04 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme8n1 : 2.04 308.54 19.28 31.44 0.00 183324.42 46347.06 1127428.92
00:24:45.954 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme9n1 ended in about 2.04 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme9n1 : 2.04 308.40 19.28 31.43 0.00 182978.18 45088.77 1120718.03
00:24:45.954 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.954 Job: Nvme10n1 ended in about 2.04 seconds with error
00:24:45.954 Verification LBA range: start 0x0 length 0x400
00:24:45.954 Nvme10n1 : 2.04 205.67 12.85 31.42 0.00 261183.13 44040.19 1120718.03
00:24:45.954 ===================================================================================================================
00:24:45.954 Total : 2982.62 186.41 314.77 0.00 190578.44 20132.66 1167694.23
00:24:45.954 [2024-07-26 21:28:20.556396] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:45.954 [2024-07-26 21:28:20.556415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:45.954 [2024-07-26 21:28:20.556431] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:45.954 [2024-07-26 21:28:20.566294] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:45.954 [2024-07-26 21:28:20.566367] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:45.954 [2024-07-26 21:28:20.566389] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0
00:24:45.954 [2024-07-26 21:28:20.566463] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:45.954 [2024-07-26 21:28:20.566478] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:45.954 [2024-07-26 21:28:20.566489] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0
00:24:45.954 [2024-07-26 21:28:20.566582] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:45.954 [2024-07-26 21:28:20.566596] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:45.954 [2024-07-26 21:28:20.566606] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba580
00:24:45.954 [2024-07-26 21:28:20.570038] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:45.954 [2024-07-26 21:28:20.570089] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:45.954 [2024-07-26 21:28:20.570116] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc7c0
00:24:45.954 [2024-07-26 21:28:20.570232] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:45.954 [2024-07-26 21:28:20.570267] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:45.954 [2024-07-26 21:28:20.570301] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6100
00:24:45.954 [2024-07-26 21:28:20.570368] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:45.954 [2024-07-26 21:28:20.570386] nvme_rdma.c:1163:nvme_rdma_connect_established:
*ERROR*: RDMA connect error -74 00:24:45.954 [2024-07-26 21:28:20.570397] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd540 00:24:45.954 [2024-07-26 21:28:20.570480] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:45.954 [2024-07-26 21:28:20.570495] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:45.954 [2024-07-26 21:28:20.570505] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a89c0 00:24:45.954 [2024-07-26 21:28:20.571140] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:45.954 [2024-07-26 21:28:20.571159] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:45.954 [2024-07-26 21:28:20.571169] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c180 00:24:45.954 [2024-07-26 21:28:20.571265] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:45.954 [2024-07-26 21:28:20.571280] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:45.954 [2024-07-26 21:28:20.571290] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:24:45.954 [2024-07-26 21:28:20.571394] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:45.954 [2024-07-26 21:28:20.571408] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:45.954 [2024-07-26 21:28:20.571419] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:24:46.213 21:28:20 -- target/shutdown.sh@141 -- # kill -9 1775820 00:24:46.213 21:28:20 -- target/shutdown.sh@143 -- # stoptarget 00:24:46.214 21:28:20 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:46.214 21:28:20 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:46.214 21:28:20 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:46.214 21:28:20 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:46.214 21:28:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:46.214 21:28:20 -- nvmf/common.sh@116 -- # sync 00:24:46.214 21:28:20 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:46.214 21:28:20 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:46.214 21:28:20 -- nvmf/common.sh@119 -- # set +e 00:24:46.214 21:28:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:46.214 21:28:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:46.214 rmmod nvme_rdma 00:24:46.214 rmmod nvme_fabrics 00:24:46.214 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 1775820 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:24:46.214 21:28:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:46.214 21:28:20 -- nvmf/common.sh@123 -- # set -e 00:24:46.214 21:28:20 -- 
nvmf/common.sh@124 -- # return 0 00:24:46.214 21:28:20 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:46.214 21:28:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:46.214 21:28:20 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:46.214 00:24:46.214 real 0m5.272s 00:24:46.214 user 0m18.074s 00:24:46.214 sys 0m1.386s 00:24:46.214 21:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:46.214 21:28:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.214 ************************************ 00:24:46.214 END TEST nvmf_shutdown_tc3 00:24:46.214 ************************************ 00:24:46.214 21:28:20 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:46.214 00:24:46.214 real 0m26.485s 00:24:46.214 user 1m13.920s 00:24:46.214 sys 0m10.335s 00:24:46.214 21:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:46.214 21:28:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.214 ************************************ 00:24:46.214 END TEST nvmf_shutdown 00:24:46.214 ************************************ 00:24:46.214 21:28:20 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:46.214 21:28:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:46.214 21:28:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.214 21:28:21 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:46.214 21:28:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:46.214 21:28:21 -- common/autotest_common.sh@10 -- # set +x 00:24:46.214 21:28:21 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:46.214 21:28:21 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:46.214 21:28:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:46.214 21:28:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:46.214 21:28:21 -- common/autotest_common.sh@10 -- # set +x 00:24:46.214 ************************************ 00:24:46.214 START TEST nvmf_multicontroller 00:24:46.214 ************************************ 00:24:46.214 21:28:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:46.472 * Looking for test storage... 
00:24:46.472 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:46.472 21:28:21 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.472 21:28:21 -- nvmf/common.sh@7 -- # uname -s 00:24:46.473 21:28:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.473 21:28:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.473 21:28:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.473 21:28:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.473 21:28:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.473 21:28:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.473 21:28:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.473 21:28:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.473 21:28:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.473 21:28:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.473 21:28:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:46.473 21:28:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:46.473 21:28:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.473 21:28:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.473 21:28:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.473 21:28:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:46.473 21:28:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.473 21:28:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.473 21:28:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.473 21:28:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.473 21:28:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.473 21:28:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.473 21:28:21 -- paths/export.sh@5 -- # export PATH 00:24:46.473 21:28:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.473 21:28:21 -- nvmf/common.sh@46 -- # : 0 00:24:46.473 21:28:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:46.473 21:28:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:46.473 21:28:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:46.473 21:28:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.473 21:28:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.473 21:28:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:46.473 21:28:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:46.473 21:28:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:46.473 21:28:21 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:46.473 21:28:21 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:46.473 21:28:21 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:46.473 21:28:21 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:46.473 21:28:21 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:46.473 21:28:21 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:24:46.473 21:28:21 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:46.473 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
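The skip above is decided by the transport guard near the top of multicontroller.sh: when the suite is invoked with --transport=rdma, the script prints the notice and exits 0, so run_test still records END TEST nvmf_multicontroller with timings rather than a failure. A minimal sketch of such a guard, assuming the transport is carried in a TEST_TRANSPORT variable (the trace only shows the already-expanded comparison):

  # Skip the multicontroller test cleanly when the transport is RDMA.
  if [ "$TEST_TRANSPORT" == "rdma" ]; then
      echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
      exit 0
  fi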
00:24:46.473 21:28:21 -- host/multicontroller.sh@20 -- # exit 0 00:24:46.473 00:24:46.473 real 0m0.128s 00:24:46.473 user 0m0.057s 00:24:46.473 sys 0m0.082s 00:24:46.473 21:28:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:46.473 21:28:21 -- common/autotest_common.sh@10 -- # set +x 00:24:46.473 ************************************ 00:24:46.473 END TEST nvmf_multicontroller 00:24:46.473 ************************************ 00:24:46.473 21:28:21 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:46.473 21:28:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:46.473 21:28:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:46.473 21:28:21 -- common/autotest_common.sh@10 -- # set +x 00:24:46.473 ************************************ 00:24:46.473 START TEST nvmf_aer 00:24:46.473 ************************************ 00:24:46.473 21:28:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:46.473 * Looking for test storage... 00:24:46.473 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:46.473 21:28:21 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.473 21:28:21 -- nvmf/common.sh@7 -- # uname -s 00:24:46.473 21:28:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.473 21:28:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.473 21:28:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.473 21:28:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.473 21:28:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.473 21:28:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.473 21:28:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.473 21:28:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.473 21:28:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.473 21:28:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.473 21:28:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:46.473 21:28:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:46.473 21:28:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.473 21:28:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.473 21:28:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.473 21:28:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:46.473 21:28:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.473 21:28:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.473 21:28:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.473 21:28:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.473 21:28:21 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.473 21:28:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.473 21:28:21 -- paths/export.sh@5 -- # export PATH 00:24:46.473 21:28:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.473 21:28:21 -- nvmf/common.sh@46 -- # : 0 00:24:46.473 21:28:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:46.473 21:28:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:46.473 21:28:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:46.473 21:28:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.473 21:28:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.473 21:28:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:46.473 21:28:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:46.473 21:28:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:46.473 21:28:21 -- host/aer.sh@11 -- # nvmftestinit 00:24:46.473 21:28:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:46.473 21:28:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.473 21:28:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:46.473 21:28:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:46.473 21:28:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:46.473 21:28:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.473 21:28:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.473 21:28:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.731 21:28:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:46.731 21:28:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:46.731 21:28:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:46.731 21:28:21 -- common/autotest_common.sh@10 -- # set +x 00:24:54.852 21:28:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:54.852 21:28:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:54.852 21:28:29 -- nvmf/common.sh@290 -- # local -a 
pci_devs 00:24:54.852 21:28:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:54.852 21:28:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:54.852 21:28:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:54.852 21:28:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:54.852 21:28:29 -- nvmf/common.sh@294 -- # net_devs=() 00:24:54.852 21:28:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:54.852 21:28:29 -- nvmf/common.sh@295 -- # e810=() 00:24:54.852 21:28:29 -- nvmf/common.sh@295 -- # local -ga e810 00:24:54.853 21:28:29 -- nvmf/common.sh@296 -- # x722=() 00:24:54.853 21:28:29 -- nvmf/common.sh@296 -- # local -ga x722 00:24:54.853 21:28:29 -- nvmf/common.sh@297 -- # mlx=() 00:24:54.853 21:28:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:54.853 21:28:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.853 21:28:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:54.853 21:28:29 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:54.853 21:28:29 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:54.853 21:28:29 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:54.853 21:28:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:54.853 21:28:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:54.853 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:54.853 21:28:29 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.853 21:28:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:54.853 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:54.853 21:28:29 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
00:24:54.853 21:28:29 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.853 21:28:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:54.853 21:28:29 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.853 21:28:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:54.853 21:28:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.853 21:28:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:54.853 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:54.853 21:28:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.853 21:28:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.853 21:28:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:54.853 21:28:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.853 21:28:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:54.853 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:54.853 21:28:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.853 21:28:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:54.853 21:28:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:54.853 21:28:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:54.853 21:28:29 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:54.853 21:28:29 -- nvmf/common.sh@57 -- # uname 00:24:54.853 21:28:29 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:54.853 21:28:29 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:54.853 21:28:29 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:54.853 21:28:29 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:54.853 21:28:29 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:54.853 21:28:29 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:54.853 21:28:29 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:54.853 21:28:29 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:54.853 21:28:29 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:54.853 21:28:29 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:54.853 21:28:29 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:54.853 21:28:29 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:54.853 21:28:29 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:54.853 21:28:29 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:54.853 21:28:29 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.853 21:28:29 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:54.853 21:28:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:54.853 21:28:29 -- nvmf/common.sh@104 -- # continue 2 00:24:54.853 21:28:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
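Before any addresses are assigned, rdma_device_init loads the InfiniBand and RDMA-CM kernel modules in the order traced above. A compact sketch of that load sequence (run as root; modprobe is a no-op for modules that are already present):

  # Kernel modules loaded by load_ib_rdma_modules, in the order shown in the trace.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done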
00:24:54.853 21:28:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:54.853 21:28:29 -- nvmf/common.sh@104 -- # continue 2 00:24:54.853 21:28:29 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:54.853 21:28:29 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:54.853 21:28:29 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:54.853 21:28:29 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:54.853 21:28:29 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:54.853 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:54.853 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:54.853 altname enp217s0f0np0 00:24:54.853 altname ens818f0np0 00:24:54.853 inet 192.168.100.8/24 scope global mlx_0_0 00:24:54.853 valid_lft forever preferred_lft forever 00:24:54.853 21:28:29 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:54.853 21:28:29 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:54.853 21:28:29 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:54.853 21:28:29 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:54.853 21:28:29 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:54.853 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:54.853 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:54.853 altname enp217s0f1np1 00:24:54.853 altname ens818f1np1 00:24:54.853 inet 192.168.100.9/24 scope global mlx_0_1 00:24:54.853 valid_lft forever preferred_lft forever 00:24:54.853 21:28:29 -- nvmf/common.sh@410 -- # return 0 00:24:54.853 21:28:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:54.853 21:28:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:54.853 21:28:29 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:54.853 21:28:29 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:54.853 21:28:29 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:54.853 21:28:29 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:54.853 21:28:29 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:54.853 21:28:29 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.853 21:28:29 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:54.853 21:28:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:54.853 21:28:29 -- nvmf/common.sh@104 -- # continue 2 00:24:54.853 21:28:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
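The 192.168.100.8 and 192.168.100.9 values above are read back per interface with a short ip/awk/cut pipeline. A sketch of that helper; the pipeline is verbatim from the trace, the function wrapper around it is an assumption:

  get_ip_address() {
      local interface=$1
      # Keep only the IPv4 address, dropping the /24 prefix length.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  # get_ip_address mlx_0_0  -> 192.168.100.8
  # get_ip_address mlx_0_1  -> 192.168.100.9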
00:24:54.853 21:28:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.853 21:28:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:54.853 21:28:29 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:54.853 21:28:29 -- nvmf/common.sh@104 -- # continue 2 00:24:54.853 21:28:29 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:54.853 21:28:29 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:54.853 21:28:29 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:54.853 21:28:29 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:54.853 21:28:29 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:54.853 21:28:29 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:54.853 21:28:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:54.854 21:28:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:54.854 21:28:29 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:54.854 192.168.100.9' 00:24:54.854 21:28:29 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:54.854 192.168.100.9' 00:24:54.854 21:28:29 -- nvmf/common.sh@445 -- # head -n 1 00:24:54.854 21:28:29 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:54.854 21:28:29 -- nvmf/common.sh@446 -- # head -n 1 00:24:54.854 21:28:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:54.854 192.168.100.9' 00:24:54.854 21:28:29 -- nvmf/common.sh@446 -- # tail -n +2 00:24:54.854 21:28:29 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:54.854 21:28:29 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:54.854 21:28:29 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:54.854 21:28:29 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:54.854 21:28:29 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:54.854 21:28:29 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:54.854 21:28:29 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:54.854 21:28:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:54.854 21:28:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:54.854 21:28:29 -- common/autotest_common.sh@10 -- # set +x 00:24:54.854 21:28:29 -- nvmf/common.sh@469 -- # nvmfpid=1780651 00:24:54.854 21:28:29 -- nvmf/common.sh@470 -- # waitforlisten 1780651 00:24:54.854 21:28:29 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:54.854 21:28:29 -- common/autotest_common.sh@819 -- # '[' -z 1780651 ']' 00:24:54.854 21:28:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.854 21:28:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:54.854 21:28:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
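With both interfaces enumerated, the first and second target IPs are simply the first and second lines of RDMA_IP_LIST, after which the transport options are fixed and the host-side nvme-rdma module is loaded. A sketch of that tail end of nvmftestinit using the commands visible in the trace (the error branch for an empty list is an assumption, since only the successful path is traced):

  # RDMA_IP_LIST holds one address per line: 192.168.100.8 then 192.168.100.9.
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  [ -z "$NVMF_FIRST_TARGET_IP" ] && exit 1   # assumed failure path: no usable RDMA IP
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma                         # host-side initiator module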
00:24:54.854 21:28:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:54.854 21:28:29 -- common/autotest_common.sh@10 -- # set +x 00:24:54.854 [2024-07-26 21:28:29.648270] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:54.854 [2024-07-26 21:28:29.648322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.854 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.113 [2024-07-26 21:28:29.733679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.113 [2024-07-26 21:28:29.773037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:55.113 [2024-07-26 21:28:29.773143] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.113 [2024-07-26 21:28:29.773155] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.113 [2024-07-26 21:28:29.773165] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.113 [2024-07-26 21:28:29.773208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.113 [2024-07-26 21:28:29.773302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.113 [2024-07-26 21:28:29.773397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.113 [2024-07-26 21:28:29.773399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.682 21:28:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:55.682 21:28:30 -- common/autotest_common.sh@852 -- # return 0 00:24:55.682 21:28:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:55.682 21:28:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.682 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:55.682 21:28:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.682 21:28:30 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:55.682 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.682 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:55.682 [2024-07-26 21:28:30.520045] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ae5060/0x1ae9550) succeed. 00:24:55.682 [2024-07-26 21:28:30.530361] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ae6650/0x1b2abe0) succeed. 
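At this point nvmf_tgt is up and answering on /var/tmp/spdk.sock, and the first RPC of the AER test creates the RDMA transport with the shared-buffer count and IO unit size seen above. rpc_cmd in the trace is a thin wrapper around SPDK's rpc.py client; a sketch of the same call issued directly (the relative script path is an assumption):

  # Create the RDMA transport on the running target.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
      -t rdma --num-shared-buffers 1024 -u 8192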
00:24:55.941 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.941 21:28:30 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:55.941 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.941 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:55.941 Malloc0 00:24:55.941 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.941 21:28:30 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:55.941 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.941 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:55.941 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.941 21:28:30 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:55.941 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.941 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:55.941 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.941 21:28:30 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:55.941 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.941 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:55.941 [2024-07-26 21:28:30.696286] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:55.941 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.941 21:28:30 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:55.941 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.941 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:55.941 [2024-07-26 21:28:30.703981] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:55.941 [ 00:24:55.941 { 00:24:55.941 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:55.941 "subtype": "Discovery", 00:24:55.941 "listen_addresses": [], 00:24:55.941 "allow_any_host": true, 00:24:55.941 "hosts": [] 00:24:55.941 }, 00:24:55.941 { 00:24:55.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.941 "subtype": "NVMe", 00:24:55.941 "listen_addresses": [ 00:24:55.941 { 00:24:55.941 "transport": "RDMA", 00:24:55.941 "trtype": "RDMA", 00:24:55.941 "adrfam": "IPv4", 00:24:55.941 "traddr": "192.168.100.8", 00:24:55.941 "trsvcid": "4420" 00:24:55.941 } 00:24:55.941 ], 00:24:55.941 "allow_any_host": true, 00:24:55.941 "hosts": [], 00:24:55.941 "serial_number": "SPDK00000000000001", 00:24:55.941 "model_number": "SPDK bdev Controller", 00:24:55.941 "max_namespaces": 2, 00:24:55.941 "min_cntlid": 1, 00:24:55.941 "max_cntlid": 65519, 00:24:55.941 "namespaces": [ 00:24:55.941 { 00:24:55.941 "nsid": 1, 00:24:55.941 "bdev_name": "Malloc0", 00:24:55.941 "name": "Malloc0", 00:24:55.941 "nguid": "1A34149174E84FEA901C30D338FAAEE9", 00:24:55.941 "uuid": "1a341491-74e8-4fea-901c-30d338faaee9" 00:24:55.941 } 00:24:55.941 ] 00:24:55.941 } 00:24:55.941 ] 00:24:55.941 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.941 21:28:30 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:55.941 21:28:30 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:55.941 21:28:30 -- host/aer.sh@33 -- # aerpid=1780768 00:24:55.941 21:28:30 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:55.941 21:28:30 -- 
common/autotest_common.sh@1244 -- # local i=0 00:24:55.941 21:28:30 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:55.941 21:28:30 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:55.941 21:28:30 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:24:55.941 21:28:30 -- common/autotest_common.sh@1247 -- # i=1 00:24:55.941 21:28:30 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:55.941 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.201 21:28:30 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:56.201 21:28:30 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:24:56.201 21:28:30 -- common/autotest_common.sh@1247 -- # i=2 00:24:56.201 21:28:30 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:56.201 21:28:30 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:56.201 21:28:30 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:56.201 21:28:30 -- common/autotest_common.sh@1255 -- # return 0 00:24:56.201 21:28:30 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:56.201 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.201 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:56.201 Malloc1 00:24:56.201 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.201 21:28:30 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:56.201 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.201 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:56.201 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.201 21:28:30 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:56.201 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.201 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:24:56.201 [ 00:24:56.201 { 00:24:56.201 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:56.201 "subtype": "Discovery", 00:24:56.201 "listen_addresses": [], 00:24:56.201 "allow_any_host": true, 00:24:56.201 "hosts": [] 00:24:56.201 }, 00:24:56.201 { 00:24:56.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.201 "subtype": "NVMe", 00:24:56.201 "listen_addresses": [ 00:24:56.201 { 00:24:56.201 "transport": "RDMA", 00:24:56.201 "trtype": "RDMA", 00:24:56.201 "adrfam": "IPv4", 00:24:56.201 "traddr": "192.168.100.8", 00:24:56.201 "trsvcid": "4420" 00:24:56.201 } 00:24:56.201 ], 00:24:56.201 "allow_any_host": true, 00:24:56.201 "hosts": [], 00:24:56.201 "serial_number": "SPDK00000000000001", 00:24:56.201 "model_number": "SPDK bdev Controller", 00:24:56.201 "max_namespaces": 2, 00:24:56.201 "min_cntlid": 1, 00:24:56.201 "max_cntlid": 65519, 00:24:56.201 "namespaces": [ 00:24:56.201 { 00:24:56.201 "nsid": 1, 00:24:56.201 "bdev_name": "Malloc0", 00:24:56.201 "name": "Malloc0", 00:24:56.201 "nguid": "1A34149174E84FEA901C30D338FAAEE9", 00:24:56.201 "uuid": "1a341491-74e8-4fea-901c-30d338faaee9" 00:24:56.201 }, 00:24:56.201 { 00:24:56.201 "nsid": 2, 00:24:56.201 "bdev_name": "Malloc1", 00:24:56.201 "name": "Malloc1", 00:24:56.201 "nguid": "D5CD420FFA5A4BD19F598B85DBF7F54C", 00:24:56.201 "uuid": "d5cd420f-fa5a-4bd1-9f59-8b85dbf7f54c" 00:24:56.201 } 00:24:56.201 ] 00:24:56.201 } 00:24:56.201 ] 00:24:56.201 21:28:31 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.201 21:28:31 -- host/aer.sh@43 -- # wait 1780768 00:24:56.201 Asynchronous Event Request test 00:24:56.201 Attaching to 192.168.100.8 00:24:56.201 Attached to 192.168.100.8 00:24:56.201 Registering asynchronous event callbacks... 00:24:56.201 Starting namespace attribute notice tests for all controllers... 00:24:56.201 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:56.201 aer_cb - Changed Namespace 00:24:56.201 Cleaning up... 00:24:56.201 21:28:31 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:56.201 21:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.201 21:28:31 -- common/autotest_common.sh@10 -- # set +x 00:24:56.201 21:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.201 21:28:31 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:56.201 21:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.201 21:28:31 -- common/autotest_common.sh@10 -- # set +x 00:24:56.461 21:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.461 21:28:31 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:56.461 21:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.461 21:28:31 -- common/autotest_common.sh@10 -- # set +x 00:24:56.461 21:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.461 21:28:31 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:56.461 21:28:31 -- host/aer.sh@51 -- # nvmftestfini 00:24:56.461 21:28:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:56.461 21:28:31 -- nvmf/common.sh@116 -- # sync 00:24:56.461 21:28:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:56.461 21:28:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:56.461 21:28:31 -- nvmf/common.sh@119 -- # set +e 00:24:56.461 21:28:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:56.461 21:28:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:56.461 rmmod nvme_rdma 00:24:56.461 rmmod nvme_fabrics 00:24:56.461 21:28:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:56.461 21:28:31 -- nvmf/common.sh@123 -- # set -e 00:24:56.461 21:28:31 -- nvmf/common.sh@124 -- # return 0 00:24:56.461 21:28:31 -- nvmf/common.sh@477 -- # '[' -n 1780651 ']' 00:24:56.461 21:28:31 -- nvmf/common.sh@478 -- # killprocess 1780651 00:24:56.461 21:28:31 -- common/autotest_common.sh@926 -- # '[' -z 1780651 ']' 00:24:56.461 21:28:31 -- common/autotest_common.sh@930 -- # kill -0 1780651 00:24:56.461 21:28:31 -- common/autotest_common.sh@931 -- # uname 00:24:56.461 21:28:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:56.461 21:28:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1780651 00:24:56.461 21:28:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:56.461 21:28:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:56.461 21:28:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1780651' 00:24:56.461 killing process with pid 1780651 00:24:56.461 21:28:31 -- common/autotest_common.sh@945 -- # kill 1780651 00:24:56.461 [2024-07-26 21:28:31.191436] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:56.461 21:28:31 -- common/autotest_common.sh@950 -- # wait 1780651 00:24:56.721 21:28:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
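Stripped of the tracing, the aer.sh flow above is a short RPC sequence against that target plus the host-side aer tool, which registers its callbacks and then waits for the namespace-attribute notice that adding a second namespace triggers ("aer_cb - Changed Namespace" above). A sketch of the sequence with the same arguments as the trace; script paths relative to the SPDK tree are assumptions:

  # Namespace 1: 64 MiB malloc bdev with 512-byte blocks on a two-namespace subsystem.
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # Host side: wait for an AER; the touch file (removed beforehand by the script)
  # is recreated by the tool to signal it is ready.
  test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &

  # Adding namespace 2 fires the namespace-attribute-changed AER on the host.
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2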
00:24:56.721 21:28:31 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:56.721 00:24:56.721 real 0m10.225s 00:24:56.721 user 0m8.855s 00:24:56.721 sys 0m6.849s 00:24:56.721 21:28:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.721 21:28:31 -- common/autotest_common.sh@10 -- # set +x 00:24:56.721 ************************************ 00:24:56.721 END TEST nvmf_aer 00:24:56.721 ************************************ 00:24:56.721 21:28:31 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:56.721 21:28:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:56.721 21:28:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:56.721 21:28:31 -- common/autotest_common.sh@10 -- # set +x 00:24:56.721 ************************************ 00:24:56.721 START TEST nvmf_async_init 00:24:56.721 ************************************ 00:24:56.721 21:28:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:56.721 * Looking for test storage... 00:24:56.721 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:56.721 21:28:31 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.721 21:28:31 -- nvmf/common.sh@7 -- # uname -s 00:24:56.981 21:28:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.981 21:28:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.981 21:28:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.981 21:28:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.981 21:28:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.981 21:28:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.981 21:28:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.981 21:28:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.981 21:28:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.981 21:28:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.981 21:28:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:56.981 21:28:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:56.981 21:28:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.981 21:28:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.981 21:28:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.981 21:28:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:56.981 21:28:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.981 21:28:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.981 21:28:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.981 21:28:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.981 21:28:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.981 21:28:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.981 21:28:31 -- paths/export.sh@5 -- # export PATH 00:24:56.981 21:28:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.981 21:28:31 -- nvmf/common.sh@46 -- # : 0 00:24:56.981 21:28:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:56.981 21:28:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:56.981 21:28:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:56.981 21:28:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.981 21:28:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.981 21:28:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:56.981 21:28:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:56.981 21:28:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:56.981 21:28:31 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:56.981 21:28:31 -- host/async_init.sh@14 -- # null_block_size=512 00:24:56.981 21:28:31 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:56.981 21:28:31 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:56.981 21:28:31 -- host/async_init.sh@20 -- # uuidgen 00:24:56.981 21:28:31 -- host/async_init.sh@20 -- # tr -d - 00:24:56.981 21:28:31 -- host/async_init.sh@20 -- # nguid=d77d1ccd9d0e4d0099c76b0543d2c98e 00:24:56.981 21:28:31 -- host/async_init.sh@22 -- # nvmftestinit 00:24:56.981 21:28:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 
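async_init.sh sizes its null bdev up front (1024 blocks of 512 bytes, exposed as null0 and later attached as nvme0) and derives the namespace NGUID from a fresh UUID with the hyphens stripped, which is why the hyphenated form of the same hex string reappears as the bdev's uuid and alias once the namespace is attached as nvme0n1. A sketch mirroring those header lines from the trace:

  null_bdev_size=1024    # blocks
  null_block_size=512    # bytes per block
  null_bdev=null0
  nvme_bdev=nvme0
  # NGUID is a UUID with the hyphens removed, e.g. d77d1ccd9d0e4d0099c76b0543d2c98e.
  nguid=$(uuidgen | tr -d -)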
00:24:56.981 21:28:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.981 21:28:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:56.982 21:28:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:56.982 21:28:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:56.982 21:28:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.982 21:28:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.982 21:28:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.982 21:28:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:56.982 21:28:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:56.982 21:28:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:56.982 21:28:31 -- common/autotest_common.sh@10 -- # set +x 00:25:05.136 21:28:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:05.136 21:28:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:05.136 21:28:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:05.136 21:28:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:05.136 21:28:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:05.136 21:28:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:05.136 21:28:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:05.136 21:28:39 -- nvmf/common.sh@294 -- # net_devs=() 00:25:05.136 21:28:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:05.136 21:28:39 -- nvmf/common.sh@295 -- # e810=() 00:25:05.136 21:28:39 -- nvmf/common.sh@295 -- # local -ga e810 00:25:05.136 21:28:39 -- nvmf/common.sh@296 -- # x722=() 00:25:05.136 21:28:39 -- nvmf/common.sh@296 -- # local -ga x722 00:25:05.136 21:28:39 -- nvmf/common.sh@297 -- # mlx=() 00:25:05.136 21:28:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:05.136 21:28:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.136 21:28:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:05.137 21:28:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:05.137 21:28:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:05.137 21:28:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:05.137 21:28:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:05.137 21:28:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:05.137 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:05.137 21:28:39 -- 
nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:05.137 21:28:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:05.137 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:05.137 21:28:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:05.137 21:28:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:05.137 21:28:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.137 21:28:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:05.137 21:28:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.137 21:28:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:05.137 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:05.137 21:28:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.137 21:28:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.137 21:28:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:05.137 21:28:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.137 21:28:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:05.137 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:05.137 21:28:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.137 21:28:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:05.137 21:28:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:05.137 21:28:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:05.137 21:28:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:05.137 21:28:39 -- nvmf/common.sh@57 -- # uname 00:25:05.137 21:28:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:05.137 21:28:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:05.137 21:28:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:05.137 21:28:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:05.137 21:28:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:05.137 21:28:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:05.137 21:28:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:05.137 21:28:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:05.137 21:28:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:05.137 21:28:39 -- nvmf/common.sh@71 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:25:05.137 21:28:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:05.137 21:28:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:05.137 21:28:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:05.137 21:28:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:05.137 21:28:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:05.137 21:28:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:05.137 21:28:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:05.137 21:28:39 -- nvmf/common.sh@104 -- # continue 2 00:25:05.137 21:28:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:05.137 21:28:39 -- nvmf/common.sh@104 -- # continue 2 00:25:05.137 21:28:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:05.137 21:28:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:05.137 21:28:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.137 21:28:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:05.137 21:28:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:05.137 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:05.137 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:05.137 altname enp217s0f0np0 00:25:05.137 altname ens818f0np0 00:25:05.137 inet 192.168.100.8/24 scope global mlx_0_0 00:25:05.137 valid_lft forever preferred_lft forever 00:25:05.137 21:28:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:05.137 21:28:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:05.137 21:28:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.137 21:28:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:05.137 21:28:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:05.137 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:05.137 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:05.137 altname enp217s0f1np1 00:25:05.137 altname ens818f1np1 00:25:05.137 inet 192.168.100.9/24 scope global mlx_0_1 00:25:05.137 valid_lft forever preferred_lft forever 00:25:05.137 21:28:39 -- nvmf/common.sh@410 -- # return 0 00:25:05.137 21:28:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:05.137 21:28:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:05.137 21:28:39 -- 
nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:05.137 21:28:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:05.137 21:28:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:05.137 21:28:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:05.137 21:28:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:05.137 21:28:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:05.137 21:28:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:05.137 21:28:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:05.137 21:28:39 -- nvmf/common.sh@104 -- # continue 2 00:25:05.137 21:28:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.137 21:28:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:05.137 21:28:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:05.137 21:28:39 -- nvmf/common.sh@104 -- # continue 2 00:25:05.137 21:28:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:05.137 21:28:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:05.137 21:28:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.137 21:28:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:05.137 21:28:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:05.137 21:28:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.137 21:28:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.137 21:28:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:05.137 192.168.100.9' 00:25:05.137 21:28:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:05.137 192.168.100.9' 00:25:05.137 21:28:39 -- nvmf/common.sh@445 -- # head -n 1 00:25:05.137 21:28:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:05.137 21:28:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:05.137 192.168.100.9' 00:25:05.137 21:28:39 -- nvmf/common.sh@446 -- # tail -n +2 00:25:05.137 21:28:39 -- nvmf/common.sh@446 -- # head -n 1 00:25:05.137 21:28:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:05.137 21:28:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:05.137 21:28:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:05.137 21:28:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:05.137 21:28:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:05.138 21:28:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:05.138 21:28:39 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:05.138 21:28:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:05.138 
21:28:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:05.138 21:28:39 -- common/autotest_common.sh@10 -- # set +x 00:25:05.138 21:28:39 -- nvmf/common.sh@469 -- # nvmfpid=1784874 00:25:05.138 21:28:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:05.138 21:28:39 -- nvmf/common.sh@470 -- # waitforlisten 1784874 00:25:05.138 21:28:39 -- common/autotest_common.sh@819 -- # '[' -z 1784874 ']' 00:25:05.138 21:28:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.138 21:28:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:05.138 21:28:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.138 21:28:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:05.138 21:28:39 -- common/autotest_common.sh@10 -- # set +x 00:25:05.138 [2024-07-26 21:28:39.870227] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:05.138 [2024-07-26 21:28:39.870276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.138 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.138 [2024-07-26 21:28:39.953982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.138 [2024-07-26 21:28:39.991121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:05.138 [2024-07-26 21:28:39.991229] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.138 [2024-07-26 21:28:39.991238] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.138 [2024-07-26 21:28:39.991248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.138 [2024-07-26 21:28:39.991273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.072 21:28:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:06.072 21:28:40 -- common/autotest_common.sh@852 -- # return 0 00:25:06.072 21:28:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:06.072 21:28:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.072 21:28:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.072 21:28:40 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:06.072 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.072 [2024-07-26 21:28:40.733117] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13dbdb0/0x13e02a0) succeed. 00:25:06.072 [2024-07-26 21:28:40.742761] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13dd2b0/0x1421930) succeed. 
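For reference, the per-interface address lookup traced above reduces to a short shell helper; this is a sketch using the interface names and addresses reported by this run, not part of the captured output:
# Sketch of the get_ip_address lookup exercised above (mlx_0_0/mlx_0_1 as reported in this log).
get_ip_address() {
  local interface=$1
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run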
00:25:06.072 21:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.072 21:28:40 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:06.072 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.072 null0 00:25:06.072 21:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.072 21:28:40 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:06.072 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.072 21:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.072 21:28:40 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:06.072 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.072 21:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.072 21:28:40 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d77d1ccd9d0e4d0099c76b0543d2c98e 00:25:06.072 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.072 21:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.072 21:28:40 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:06.072 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.072 [2024-07-26 21:28:40.830207] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:06.072 21:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.072 21:28:40 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:06.072 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.072 nvme0n1 00:25:06.072 21:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.072 21:28:40 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:06.072 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.072 [ 00:25:06.072 { 00:25:06.072 "name": "nvme0n1", 00:25:06.072 "aliases": [ 00:25:06.072 "d77d1ccd-9d0e-4d00-99c7-6b0543d2c98e" 00:25:06.072 ], 00:25:06.072 "product_name": "NVMe disk", 00:25:06.072 "block_size": 512, 00:25:06.072 "num_blocks": 2097152, 00:25:06.072 "uuid": "d77d1ccd-9d0e-4d00-99c7-6b0543d2c98e", 00:25:06.072 "assigned_rate_limits": { 00:25:06.072 "rw_ios_per_sec": 0, 00:25:06.072 "rw_mbytes_per_sec": 0, 00:25:06.072 "r_mbytes_per_sec": 0, 00:25:06.072 "w_mbytes_per_sec": 0 00:25:06.072 }, 00:25:06.072 "claimed": false, 00:25:06.072 "zoned": false, 00:25:06.072 "supported_io_types": { 00:25:06.072 "read": true, 00:25:06.072 "write": true, 00:25:06.072 "unmap": false, 00:25:06.072 "write_zeroes": true, 00:25:06.072 "flush": true, 00:25:06.072 "reset": true, 00:25:06.072 "compare": true, 00:25:06.072 "compare_and_write": true, 00:25:06.072 "abort": true, 00:25:06.072 "nvme_admin": true, 00:25:06.072 "nvme_io": true 00:25:06.072 }, 00:25:06.072 "memory_domains": [ 00:25:06.072 { 00:25:06.072 
"dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:25:06.072 "dma_device_type": 0 00:25:06.072 } 00:25:06.072 ], 00:25:06.072 "driver_specific": { 00:25:06.072 "nvme": [ 00:25:06.072 { 00:25:06.072 "trid": { 00:25:06.072 "trtype": "RDMA", 00:25:06.072 "adrfam": "IPv4", 00:25:06.072 "traddr": "192.168.100.8", 00:25:06.072 "trsvcid": "4420", 00:25:06.072 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:06.072 }, 00:25:06.072 "ctrlr_data": { 00:25:06.072 "cntlid": 1, 00:25:06.072 "vendor_id": "0x8086", 00:25:06.072 "model_number": "SPDK bdev Controller", 00:25:06.072 "serial_number": "00000000000000000000", 00:25:06.072 "firmware_revision": "24.01.1", 00:25:06.072 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.072 "oacs": { 00:25:06.072 "security": 0, 00:25:06.072 "format": 0, 00:25:06.072 "firmware": 0, 00:25:06.072 "ns_manage": 0 00:25:06.072 }, 00:25:06.072 "multi_ctrlr": true, 00:25:06.072 "ana_reporting": false 00:25:06.072 }, 00:25:06.072 "vs": { 00:25:06.072 "nvme_version": "1.3" 00:25:06.072 }, 00:25:06.072 "ns_data": { 00:25:06.072 "id": 1, 00:25:06.072 "can_share": true 00:25:06.072 } 00:25:06.072 } 00:25:06.072 ], 00:25:06.072 "mp_policy": "active_passive" 00:25:06.072 } 00:25:06.072 } 00:25:06.072 ] 00:25:06.072 21:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.072 21:28:40 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:06.072 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.072 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.330 [2024-07-26 21:28:40.942892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:06.330 [2024-07-26 21:28:40.967119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:06.330 [2024-07-26 21:28:40.988467] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:06.330 21:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.330 21:28:40 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:06.330 21:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.330 21:28:40 -- common/autotest_common.sh@10 -- # set +x 00:25:06.330 [ 00:25:06.330 { 00:25:06.330 "name": "nvme0n1", 00:25:06.330 "aliases": [ 00:25:06.330 "d77d1ccd-9d0e-4d00-99c7-6b0543d2c98e" 00:25:06.330 ], 00:25:06.330 "product_name": "NVMe disk", 00:25:06.330 "block_size": 512, 00:25:06.330 "num_blocks": 2097152, 00:25:06.330 "uuid": "d77d1ccd-9d0e-4d00-99c7-6b0543d2c98e", 00:25:06.330 "assigned_rate_limits": { 00:25:06.330 "rw_ios_per_sec": 0, 00:25:06.330 "rw_mbytes_per_sec": 0, 00:25:06.330 "r_mbytes_per_sec": 0, 00:25:06.330 "w_mbytes_per_sec": 0 00:25:06.330 }, 00:25:06.330 "claimed": false, 00:25:06.330 "zoned": false, 00:25:06.330 "supported_io_types": { 00:25:06.330 "read": true, 00:25:06.330 "write": true, 00:25:06.330 "unmap": false, 00:25:06.330 "write_zeroes": true, 00:25:06.330 "flush": true, 00:25:06.330 "reset": true, 00:25:06.330 "compare": true, 00:25:06.330 "compare_and_write": true, 00:25:06.330 "abort": true, 00:25:06.330 "nvme_admin": true, 00:25:06.330 "nvme_io": true 00:25:06.330 }, 00:25:06.330 "memory_domains": [ 00:25:06.330 { 00:25:06.330 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:25:06.330 "dma_device_type": 0 00:25:06.330 } 00:25:06.330 ], 00:25:06.330 "driver_specific": { 00:25:06.330 "nvme": [ 00:25:06.330 { 00:25:06.330 "trid": { 00:25:06.330 "trtype": "RDMA", 00:25:06.330 "adrfam": "IPv4", 00:25:06.330 "traddr": "192.168.100.8", 00:25:06.330 "trsvcid": "4420", 00:25:06.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:06.330 }, 00:25:06.330 "ctrlr_data": { 00:25:06.330 "cntlid": 2, 00:25:06.330 "vendor_id": "0x8086", 00:25:06.330 "model_number": "SPDK bdev Controller", 00:25:06.330 "serial_number": "00000000000000000000", 00:25:06.330 "firmware_revision": "24.01.1", 00:25:06.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.330 "oacs": { 00:25:06.330 "security": 0, 00:25:06.330 "format": 0, 00:25:06.330 "firmware": 0, 00:25:06.330 "ns_manage": 0 00:25:06.330 }, 00:25:06.330 "multi_ctrlr": true, 00:25:06.330 "ana_reporting": false 00:25:06.330 }, 00:25:06.330 "vs": { 00:25:06.330 "nvme_version": "1.3" 00:25:06.330 }, 00:25:06.330 "ns_data": { 00:25:06.330 "id": 1, 00:25:06.330 "can_share": true 00:25:06.330 } 00:25:06.330 } 00:25:06.330 ], 00:25:06.330 "mp_policy": "active_passive" 00:25:06.330 } 00:25:06.330 } 00:25:06.330 ] 00:25:06.330 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.330 21:28:41 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.330 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.330 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:06.330 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.330 21:28:41 -- host/async_init.sh@53 -- # mktemp 00:25:06.330 21:28:41 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.yIJfBvMFNo 00:25:06.330 21:28:41 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:06.330 21:28:41 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.yIJfBvMFNo 00:25:06.330 21:28:41 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:06.330 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.330 21:28:41 -- common/autotest_common.sh@10 -- # set +x 
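The PSK handling traced above is plain shell; a minimal sketch with the key string copied from the log (the secure-channel listener on port 4421 and the per-host PSK registration follow in the trace below):
key_path=$(mktemp)                                 # /tmp/tmp.yIJfBvMFNo in this run
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable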
00:25:06.330 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.330 21:28:41 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:25:06.330 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.330 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:06.330 [2024-07-26 21:28:41.071630] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:25:06.330 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.330 21:28:41 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yIJfBvMFNo 00:25:06.330 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.330 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:06.330 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.330 21:28:41 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yIJfBvMFNo 00:25:06.330 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.330 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:06.330 [2024-07-26 21:28:41.091662] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.330 nvme0n1 00:25:06.330 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.331 21:28:41 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:06.331 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.331 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:06.331 [ 00:25:06.331 { 00:25:06.331 "name": "nvme0n1", 00:25:06.331 "aliases": [ 00:25:06.331 "d77d1ccd-9d0e-4d00-99c7-6b0543d2c98e" 00:25:06.331 ], 00:25:06.331 "product_name": "NVMe disk", 00:25:06.331 "block_size": 512, 00:25:06.331 "num_blocks": 2097152, 00:25:06.331 "uuid": "d77d1ccd-9d0e-4d00-99c7-6b0543d2c98e", 00:25:06.331 "assigned_rate_limits": { 00:25:06.331 "rw_ios_per_sec": 0, 00:25:06.331 "rw_mbytes_per_sec": 0, 00:25:06.331 "r_mbytes_per_sec": 0, 00:25:06.331 "w_mbytes_per_sec": 0 00:25:06.331 }, 00:25:06.331 "claimed": false, 00:25:06.331 "zoned": false, 00:25:06.331 "supported_io_types": { 00:25:06.331 "read": true, 00:25:06.331 "write": true, 00:25:06.331 "unmap": false, 00:25:06.331 "write_zeroes": true, 00:25:06.331 "flush": true, 00:25:06.331 "reset": true, 00:25:06.331 "compare": true, 00:25:06.331 "compare_and_write": true, 00:25:06.331 "abort": true, 00:25:06.331 "nvme_admin": true, 00:25:06.331 "nvme_io": true 00:25:06.331 }, 00:25:06.331 "memory_domains": [ 00:25:06.331 { 00:25:06.331 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:25:06.331 "dma_device_type": 0 00:25:06.331 } 00:25:06.331 ], 00:25:06.331 "driver_specific": { 00:25:06.331 "nvme": [ 00:25:06.331 { 00:25:06.331 "trid": { 00:25:06.331 "trtype": "RDMA", 00:25:06.331 "adrfam": "IPv4", 00:25:06.331 "traddr": "192.168.100.8", 00:25:06.331 "trsvcid": "4421", 00:25:06.331 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:06.331 }, 00:25:06.331 "ctrlr_data": { 00:25:06.331 "cntlid": 3, 00:25:06.331 "vendor_id": "0x8086", 00:25:06.331 "model_number": "SPDK bdev Controller", 00:25:06.331 "serial_number": "00000000000000000000", 00:25:06.331 "firmware_revision": "24.01.1", 00:25:06.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.331 
"oacs": { 00:25:06.331 "security": 0, 00:25:06.331 "format": 0, 00:25:06.331 "firmware": 0, 00:25:06.331 "ns_manage": 0 00:25:06.331 }, 00:25:06.331 "multi_ctrlr": true, 00:25:06.331 "ana_reporting": false 00:25:06.331 }, 00:25:06.331 "vs": { 00:25:06.331 "nvme_version": "1.3" 00:25:06.331 }, 00:25:06.331 "ns_data": { 00:25:06.331 "id": 1, 00:25:06.331 "can_share": true 00:25:06.331 } 00:25:06.331 } 00:25:06.331 ], 00:25:06.331 "mp_policy": "active_passive" 00:25:06.331 } 00:25:06.331 } 00:25:06.331 ] 00:25:06.331 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.331 21:28:41 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.331 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.331 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:06.589 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.589 21:28:41 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.yIJfBvMFNo 00:25:06.589 21:28:41 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:06.589 21:28:41 -- host/async_init.sh@78 -- # nvmftestfini 00:25:06.589 21:28:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:06.589 21:28:41 -- nvmf/common.sh@116 -- # sync 00:25:06.589 21:28:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:06.589 21:28:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:06.589 21:28:41 -- nvmf/common.sh@119 -- # set +e 00:25:06.589 21:28:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:06.589 21:28:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:06.589 rmmod nvme_rdma 00:25:06.589 rmmod nvme_fabrics 00:25:06.589 21:28:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:06.589 21:28:41 -- nvmf/common.sh@123 -- # set -e 00:25:06.589 21:28:41 -- nvmf/common.sh@124 -- # return 0 00:25:06.589 21:28:41 -- nvmf/common.sh@477 -- # '[' -n 1784874 ']' 00:25:06.589 21:28:41 -- nvmf/common.sh@478 -- # killprocess 1784874 00:25:06.589 21:28:41 -- common/autotest_common.sh@926 -- # '[' -z 1784874 ']' 00:25:06.589 21:28:41 -- common/autotest_common.sh@930 -- # kill -0 1784874 00:25:06.589 21:28:41 -- common/autotest_common.sh@931 -- # uname 00:25:06.589 21:28:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:06.589 21:28:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1784874 00:25:06.589 21:28:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:06.589 21:28:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:06.589 21:28:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1784874' 00:25:06.589 killing process with pid 1784874 00:25:06.589 21:28:41 -- common/autotest_common.sh@945 -- # kill 1784874 00:25:06.589 21:28:41 -- common/autotest_common.sh@950 -- # wait 1784874 00:25:06.848 21:28:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:06.848 21:28:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:06.848 00:25:06.848 real 0m10.058s 00:25:06.848 user 0m4.073s 00:25:06.848 sys 0m6.754s 00:25:06.848 21:28:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:06.848 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:06.848 ************************************ 00:25:06.848 END TEST nvmf_async_init 00:25:06.848 ************************************ 00:25:06.848 21:28:41 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:25:06.848 21:28:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:06.848 
21:28:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:06.848 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:06.848 ************************************ 00:25:06.848 START TEST dma 00:25:06.848 ************************************ 00:25:06.848 21:28:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:25:06.848 * Looking for test storage... 00:25:06.848 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:06.848 21:28:41 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.848 21:28:41 -- nvmf/common.sh@7 -- # uname -s 00:25:06.848 21:28:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.848 21:28:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.848 21:28:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.848 21:28:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.848 21:28:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.848 21:28:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.848 21:28:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.848 21:28:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.848 21:28:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.848 21:28:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.848 21:28:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:06.848 21:28:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:06.848 21:28:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.848 21:28:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.848 21:28:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.848 21:28:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:06.848 21:28:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.848 21:28:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.848 21:28:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.848 21:28:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.848 21:28:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.848 21:28:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.848 21:28:41 -- paths/export.sh@5 -- # export PATH 00:25:06.848 21:28:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.848 21:28:41 -- nvmf/common.sh@46 -- # : 0 00:25:06.848 21:28:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:06.848 21:28:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:06.848 21:28:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:06.848 21:28:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.848 21:28:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.106 21:28:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:07.106 21:28:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:07.106 21:28:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:07.106 21:28:41 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:25:07.106 21:28:41 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:25:07.106 21:28:41 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:25:07.106 21:28:41 -- host/dma.sh@18 -- # subsystem=0 00:25:07.106 21:28:41 -- host/dma.sh@93 -- # nvmftestinit 00:25:07.106 21:28:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:07.106 21:28:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.106 21:28:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:07.106 21:28:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:07.106 21:28:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:07.106 21:28:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.106 21:28:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.106 21:28:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.106 21:28:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:07.106 21:28:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:07.106 21:28:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:07.106 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:15.218 21:28:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:15.218 21:28:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:15.218 21:28:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:15.218 21:28:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:15.218 21:28:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:15.218 21:28:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:15.218 21:28:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:15.218 21:28:49 -- nvmf/common.sh@294 -- # net_devs=() 
00:25:15.218 21:28:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:15.218 21:28:49 -- nvmf/common.sh@295 -- # e810=() 00:25:15.218 21:28:49 -- nvmf/common.sh@295 -- # local -ga e810 00:25:15.218 21:28:49 -- nvmf/common.sh@296 -- # x722=() 00:25:15.218 21:28:49 -- nvmf/common.sh@296 -- # local -ga x722 00:25:15.218 21:28:49 -- nvmf/common.sh@297 -- # mlx=() 00:25:15.218 21:28:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:15.218 21:28:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.218 21:28:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.219 21:28:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:15.219 21:28:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:15.219 21:28:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:15.219 21:28:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:15.219 21:28:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:15.219 21:28:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:15.219 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:15.219 21:28:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:15.219 21:28:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:15.219 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:15.219 21:28:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:15.219 21:28:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:15.219 21:28:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.219 21:28:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:15.219 21:28:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.219 21:28:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:15.219 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:15.219 21:28:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.219 21:28:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.219 21:28:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:15.219 21:28:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.219 21:28:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:15.219 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:15.219 21:28:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.219 21:28:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:15.219 21:28:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:15.219 21:28:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:15.219 21:28:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:15.219 21:28:49 -- nvmf/common.sh@57 -- # uname 00:25:15.219 21:28:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:15.219 21:28:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:15.219 21:28:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:15.219 21:28:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:15.219 21:28:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:15.219 21:28:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:15.219 21:28:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:15.219 21:28:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:15.219 21:28:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:15.219 21:28:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:15.219 21:28:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:15.219 21:28:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:15.219 21:28:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:15.219 21:28:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:15.219 21:28:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:15.219 21:28:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:15.219 21:28:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:15.219 21:28:49 -- nvmf/common.sh@104 -- # continue 2 00:25:15.219 21:28:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:15.219 21:28:49 -- 
nvmf/common.sh@104 -- # continue 2 00:25:15.219 21:28:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:15.219 21:28:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:15.219 21:28:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:15.219 21:28:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:15.219 21:28:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:15.219 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:15.219 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:15.219 altname enp217s0f0np0 00:25:15.219 altname ens818f0np0 00:25:15.219 inet 192.168.100.8/24 scope global mlx_0_0 00:25:15.219 valid_lft forever preferred_lft forever 00:25:15.219 21:28:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:15.219 21:28:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:15.219 21:28:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:15.219 21:28:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:15.219 21:28:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:15.219 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:15.219 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:15.219 altname enp217s0f1np1 00:25:15.219 altname ens818f1np1 00:25:15.219 inet 192.168.100.9/24 scope global mlx_0_1 00:25:15.219 valid_lft forever preferred_lft forever 00:25:15.219 21:28:49 -- nvmf/common.sh@410 -- # return 0 00:25:15.219 21:28:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:15.219 21:28:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:15.219 21:28:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:15.219 21:28:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:15.219 21:28:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:15.219 21:28:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:15.219 21:28:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:15.219 21:28:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:15.219 21:28:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:15.219 21:28:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:15.219 21:28:49 -- nvmf/common.sh@104 -- # continue 2 00:25:15.219 21:28:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:15.219 21:28:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.219 21:28:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
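rdma_device_init, traced above, is essentially a series of modprobes before the interface addresses are read back; a sketch with the module list and order taken from the trace:
# load_ib_rdma_modules as exercised above
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$mod"
done
ip addr show mlx_0_0   # reports 192.168.100.8/24 in this run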
00:25:15.219 21:28:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:15.219 21:28:49 -- nvmf/common.sh@104 -- # continue 2 00:25:15.219 21:28:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:15.219 21:28:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:15.219 21:28:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:15.219 21:28:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:15.219 21:28:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:15.219 21:28:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:15.219 21:28:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:15.219 21:28:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:15.219 192.168.100.9' 00:25:15.219 21:28:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:15.219 192.168.100.9' 00:25:15.219 21:28:49 -- nvmf/common.sh@445 -- # head -n 1 00:25:15.219 21:28:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:15.219 21:28:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:15.219 192.168.100.9' 00:25:15.219 21:28:49 -- nvmf/common.sh@446 -- # tail -n +2 00:25:15.219 21:28:49 -- nvmf/common.sh@446 -- # head -n 1 00:25:15.219 21:28:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:15.219 21:28:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:15.219 21:28:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:15.219 21:28:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:15.219 21:28:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:15.219 21:28:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:15.220 21:28:49 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:25:15.220 21:28:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:15.220 21:28:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:15.220 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:25:15.220 21:28:49 -- nvmf/common.sh@469 -- # nvmfpid=1789121 00:25:15.220 21:28:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:15.220 21:28:49 -- nvmf/common.sh@470 -- # waitforlisten 1789121 00:25:15.220 21:28:49 -- common/autotest_common.sh@819 -- # '[' -z 1789121 ']' 00:25:15.220 21:28:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.220 21:28:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:15.220 21:28:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.220 21:28:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:15.220 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:25:15.220 [2024-07-26 21:28:50.043979] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:25:15.220 [2024-07-26 21:28:50.044035] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.220 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.478 [2024-07-26 21:28:50.132595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:15.478 [2024-07-26 21:28:50.170958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:15.478 [2024-07-26 21:28:50.171060] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.478 [2024-07-26 21:28:50.171071] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.478 [2024-07-26 21:28:50.171081] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.478 [2024-07-26 21:28:50.171127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.478 [2024-07-26 21:28:50.171129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.045 21:28:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:16.045 21:28:50 -- common/autotest_common.sh@852 -- # return 0 00:25:16.045 21:28:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:16.045 21:28:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:16.045 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:25:16.045 21:28:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.045 21:28:50 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:16.045 21:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.045 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:25:16.045 [2024-07-26 21:28:50.914895] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x71a8b0/0x71eda0) succeed. 00:25:16.304 [2024-07-26 21:28:50.924050] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x71bdb0/0x760430) succeed. 
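Target bring-up for the dma test, as traced above, amounts to launching nvmf_tgt on two cores and creating the RDMA transport over RPC; a sketch, with paths relative to the SPDK repo root and the backgrounding/wait step standing in for the harness's waitforlisten:
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
# wait until /var/tmp/spdk.sock is accepting connections before issuing RPCs
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024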
00:25:16.304 21:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.304 21:28:50 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:25:16.304 21:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.304 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:25:16.304 Malloc0 00:25:16.304 21:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.304 21:28:51 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:16.304 21:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.304 21:28:51 -- common/autotest_common.sh@10 -- # set +x 00:25:16.304 21:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.304 21:28:51 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:25:16.304 21:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.304 21:28:51 -- common/autotest_common.sh@10 -- # set +x 00:25:16.304 21:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.304 21:28:51 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:16.304 21:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.304 21:28:51 -- common/autotest_common.sh@10 -- # set +x 00:25:16.304 [2024-07-26 21:28:51.074509] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:16.304 21:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.304 21:28:51 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:25:16.304 21:28:51 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:25:16.304 21:28:51 -- nvmf/common.sh@520 -- # config=() 00:25:16.304 21:28:51 -- nvmf/common.sh@520 -- # local subsystem config 00:25:16.304 21:28:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:16.304 21:28:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:16.304 { 00:25:16.304 "params": { 00:25:16.304 "name": "Nvme$subsystem", 00:25:16.304 "trtype": "$TEST_TRANSPORT", 00:25:16.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:16.304 "adrfam": "ipv4", 00:25:16.304 "trsvcid": "$NVMF_PORT", 00:25:16.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:16.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:16.304 "hdgst": ${hdgst:-false}, 00:25:16.304 "ddgst": ${ddgst:-false} 00:25:16.304 }, 00:25:16.304 "method": "bdev_nvme_attach_controller" 00:25:16.304 } 00:25:16.304 EOF 00:25:16.304 )") 00:25:16.304 21:28:51 -- nvmf/common.sh@542 -- # cat 00:25:16.304 21:28:51 -- nvmf/common.sh@544 -- # jq . 00:25:16.304 21:28:51 -- nvmf/common.sh@545 -- # IFS=, 00:25:16.304 21:28:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:16.304 "params": { 00:25:16.304 "name": "Nvme0", 00:25:16.304 "trtype": "rdma", 00:25:16.304 "traddr": "192.168.100.8", 00:25:16.304 "adrfam": "ipv4", 00:25:16.304 "trsvcid": "4420", 00:25:16.304 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:16.304 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:16.304 "hdgst": false, 00:25:16.304 "ddgst": false 00:25:16.304 }, 00:25:16.304 "method": "bdev_nvme_attach_controller" 00:25:16.304 }' 00:25:16.304 [2024-07-26 21:28:51.120902] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
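The translate pass drives test_dma against an NVMe-oF bdev described by the JSON fragment printed above; a standalone sketch, assuming /tmp/dma_target.json holds that bdev_nvme_attach_controller fragment wrapped in the usual subsystems/bdev config envelope (the envelope itself is not shown in the trace):
test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
  --json /tmp/dma_target.json -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock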
00:25:16.304 [2024-07-26 21:28:51.120951] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1789368 ] 00:25:16.305 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.564 [2024-07-26 21:28:51.204076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:16.564 [2024-07-26 21:28:51.241127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.564 [2024-07-26 21:28:51.241129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.842 bdev Nvme0n1 reports 1 memory domains 00:25:21.842 bdev Nvme0n1 supports RDMA memory domain 00:25:21.842 Initialization complete, running randrw IO for 5 sec on 2 cores 00:25:21.842 ========================================================================== 00:25:21.842 Latency [us] 00:25:21.842 IOPS MiB/s Average min max 00:25:21.842 Core 2: 22011.08 85.98 726.20 238.92 8534.41 00:25:21.842 Core 3: 22124.67 86.42 722.43 238.97 8459.86 00:25:21.842 ========================================================================== 00:25:21.842 Total : 44135.75 172.41 724.31 238.92 8534.41 00:25:21.842 00:25:21.842 Total operations: 220706, translate 220706 pull_push 0 memzero 0 00:25:21.842 21:28:56 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:25:21.842 21:28:56 -- host/dma.sh@107 -- # gen_malloc_json 00:25:21.842 21:28:56 -- host/dma.sh@21 -- # jq . 00:25:21.842 [2024-07-26 21:28:56.656496] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:25:21.842 [2024-07-26 21:28:56.656550] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790308 ] 00:25:21.842 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.101 [2024-07-26 21:28:56.738346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:22.101 [2024-07-26 21:28:56.774543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.101 [2024-07-26 21:28:56.774546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.375 bdev Malloc0 reports 1 memory domains 00:25:27.375 bdev Malloc0 doesn't support RDMA memory domain 00:25:27.375 Initialization complete, running randrw IO for 5 sec on 2 cores 00:25:27.375 ========================================================================== 00:25:27.375 Latency [us] 00:25:27.375 IOPS MiB/s Average min max 00:25:27.375 Core 2: 14668.18 57.30 1090.06 386.31 2122.04 00:25:27.375 Core 3: 14943.29 58.37 1069.97 438.11 1816.37 00:25:27.375 ========================================================================== 00:25:27.375 Total : 29611.47 115.67 1079.92 386.31 2122.04 00:25:27.375 00:25:27.375 Total operations: 148108, translate 0 pull_push 592432 memzero 0 00:25:27.375 21:29:02 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:25:27.375 21:29:02 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:25:27.375 21:29:02 -- host/dma.sh@48 -- # local subsystem=0 00:25:27.375 21:29:02 -- host/dma.sh@50 -- # jq . 00:25:27.375 Ignoring -M option 00:25:27.375 [2024-07-26 21:29:02.108314] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:25:27.375 [2024-07-26 21:29:02.108386] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791260 ] 00:25:27.375 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.375 [2024-07-26 21:29:02.189027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:27.375 [2024-07-26 21:29:02.223048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:27.375 [2024-07-26 21:29:02.223051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.633 [2024-07-26 21:29:02.435537] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:25:32.932 [2024-07-26 21:29:07.464107] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:25:32.932 bdev 6fcd7438-d9d0-4f85-a40b-09c9741044c9 reports 1 memory domains 00:25:32.932 bdev 6fcd7438-d9d0-4f85-a40b-09c9741044c9 supports RDMA memory domain 00:25:32.932 Initialization complete, running randread IO for 5 sec on 2 cores 00:25:32.932 ========================================================================== 00:25:32.932 Latency [us] 00:25:32.932 IOPS MiB/s Average min max 00:25:32.932 Core 2: 72023.63 281.34 221.25 83.99 2932.93 00:25:32.932 Core 3: 73520.46 287.19 216.74 75.41 2980.65 00:25:32.932 ========================================================================== 00:25:32.932 Total : 145544.09 568.53 218.97 75.41 2980.65 00:25:32.932 00:25:32.932 Total operations: 727805, translate 0 pull_push 0 memzero 727805 00:25:32.932 21:29:07 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:25:32.932 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.932 [2024-07-26 21:29:07.770670] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:35.481 Initializing NVMe Controllers 00:25:35.481 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:25:35.481 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:35.481 Initialization complete. Launching workers. 00:25:35.481 ======================================================== 00:25:35.481 Latency(us) 00:25:35.481 Device Information : IOPS MiB/s Average min max 00:25:35.481 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2024.70 7.91 7964.39 5985.29 8978.58 00:25:35.481 ======================================================== 00:25:35.481 Total : 2024.70 7.91 7964.39 5985.29 8978.58 00:25:35.481 00:25:35.481 21:29:10 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:25:35.481 21:29:10 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:25:35.481 21:29:10 -- host/dma.sh@48 -- # local subsystem=0 00:25:35.481 21:29:10 -- host/dma.sh@50 -- # jq . 
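The short write pass above uses spdk_nvme_perf directly against the same RDMA listener; rerunning it by hand is a one-liner, with the transport ID string copied from the trace:
build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
  -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'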
00:25:35.481 [2024-07-26 21:29:10.114659] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:35.481 [2024-07-26 21:29:10.114716] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792607 ] 00:25:35.481 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.481 [2024-07-26 21:29:10.197001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:35.481 [2024-07-26 21:29:10.234307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.481 [2024-07-26 21:29:10.234309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.740 [2024-07-26 21:29:10.450295] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:25:41.012 [2024-07-26 21:29:15.479101] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:25:41.012 bdev 9db10f69-ba72-482f-99d7-9949097e56a9 reports 1 memory domains 00:25:41.012 bdev 9db10f69-ba72-482f-99d7-9949097e56a9 supports RDMA memory domain 00:25:41.012 Initialization complete, running randrw IO for 5 sec on 2 cores 00:25:41.012 ========================================================================== 00:25:41.012 Latency [us] 00:25:41.012 IOPS MiB/s Average min max 00:25:41.012 Core 2: 19322.69 75.48 827.36 12.89 9457.54 00:25:41.012 Core 3: 19792.21 77.31 807.69 19.23 9636.88 00:25:41.012 ========================================================================== 00:25:41.012 Total : 39114.90 152.79 817.41 12.89 9636.88 00:25:41.012 00:25:41.012 Total operations: 195608, translate 195503 pull_push 0 memzero 105 00:25:41.012 21:29:15 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:41.012 21:29:15 -- host/dma.sh@120 -- # nvmftestfini 00:25:41.012 21:29:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:41.012 21:29:15 -- nvmf/common.sh@116 -- # sync 00:25:41.012 21:29:15 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:41.012 21:29:15 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:41.012 21:29:15 -- nvmf/common.sh@119 -- # set +e 00:25:41.012 21:29:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:41.012 21:29:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:41.012 rmmod nvme_rdma 00:25:41.012 rmmod nvme_fabrics 00:25:41.012 21:29:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:41.012 21:29:15 -- nvmf/common.sh@123 -- # set -e 00:25:41.012 21:29:15 -- nvmf/common.sh@124 -- # return 0 00:25:41.012 21:29:15 -- nvmf/common.sh@477 -- # '[' -n 1789121 ']' 00:25:41.012 21:29:15 -- nvmf/common.sh@478 -- # killprocess 1789121 00:25:41.012 21:29:15 -- common/autotest_common.sh@926 -- # '[' -z 1789121 ']' 00:25:41.012 21:29:15 -- common/autotest_common.sh@930 -- # kill -0 1789121 00:25:41.012 21:29:15 -- common/autotest_common.sh@931 -- # uname 00:25:41.012 21:29:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:41.012 21:29:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1789121 00:25:41.012 21:29:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:41.012 21:29:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:41.012 21:29:15 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 1789121' 00:25:41.012 killing process with pid 1789121 00:25:41.012 21:29:15 -- common/autotest_common.sh@945 -- # kill 1789121 00:25:41.012 21:29:15 -- common/autotest_common.sh@950 -- # wait 1789121 00:25:41.272 21:29:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:41.272 21:29:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:41.272 00:25:41.272 real 0m34.501s 00:25:41.272 user 1m36.722s 00:25:41.272 sys 0m7.554s 00:25:41.272 21:29:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:41.272 21:29:16 -- common/autotest_common.sh@10 -- # set +x 00:25:41.272 ************************************ 00:25:41.272 END TEST dma 00:25:41.272 ************************************ 00:25:41.272 21:29:16 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:41.272 21:29:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:41.272 21:29:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:41.272 21:29:16 -- common/autotest_common.sh@10 -- # set +x 00:25:41.272 ************************************ 00:25:41.272 START TEST nvmf_identify 00:25:41.272 ************************************ 00:25:41.272 21:29:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:41.531 * Looking for test storage... 00:25:41.531 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:41.531 21:29:16 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:41.531 21:29:16 -- nvmf/common.sh@7 -- # uname -s 00:25:41.531 21:29:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.531 21:29:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.531 21:29:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.531 21:29:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.531 21:29:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.531 21:29:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.531 21:29:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.531 21:29:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.531 21:29:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.531 21:29:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.531 21:29:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:41.531 21:29:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:41.531 21:29:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.531 21:29:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.531 21:29:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.531 21:29:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:41.531 21:29:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.531 21:29:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.531 21:29:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.531 21:29:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.531 21:29:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.531 21:29:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.531 21:29:16 -- paths/export.sh@5 -- # export PATH 00:25:41.531 21:29:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.531 21:29:16 -- nvmf/common.sh@46 -- # : 0 00:25:41.531 21:29:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:41.531 21:29:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:41.531 21:29:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:41.532 21:29:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.532 21:29:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.532 21:29:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:41.532 21:29:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:41.532 21:29:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:41.532 21:29:16 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:41.532 21:29:16 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:41.532 21:29:16 -- host/identify.sh@14 -- # nvmftestinit 00:25:41.532 21:29:16 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:41.532 21:29:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.532 21:29:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:41.532 21:29:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:41.532 21:29:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:41.532 21:29:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:25:41.532 21:29:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:41.532 21:29:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.532 21:29:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:41.532 21:29:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:41.532 21:29:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:41.532 21:29:16 -- common/autotest_common.sh@10 -- # set +x 00:25:49.658 21:29:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:49.658 21:29:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:49.658 21:29:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:49.658 21:29:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:49.658 21:29:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:49.658 21:29:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:49.658 21:29:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:49.658 21:29:24 -- nvmf/common.sh@294 -- # net_devs=() 00:25:49.658 21:29:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:49.658 21:29:24 -- nvmf/common.sh@295 -- # e810=() 00:25:49.658 21:29:24 -- nvmf/common.sh@295 -- # local -ga e810 00:25:49.658 21:29:24 -- nvmf/common.sh@296 -- # x722=() 00:25:49.658 21:29:24 -- nvmf/common.sh@296 -- # local -ga x722 00:25:49.658 21:29:24 -- nvmf/common.sh@297 -- # mlx=() 00:25:49.658 21:29:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:49.658 21:29:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.658 21:29:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:49.658 21:29:24 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:49.658 21:29:24 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:49.658 21:29:24 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:49.658 21:29:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:49.658 21:29:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:49.658 21:29:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:49.658 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:49.658 21:29:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
00:25:49.658 21:29:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:49.658 21:29:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:49.658 21:29:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:49.658 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:49.658 21:29:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:49.658 21:29:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:49.658 21:29:24 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:49.658 21:29:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.658 21:29:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:49.658 21:29:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.658 21:29:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:49.658 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:49.658 21:29:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.658 21:29:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:49.658 21:29:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.658 21:29:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:49.658 21:29:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.658 21:29:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:49.658 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:49.658 21:29:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.658 21:29:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:49.658 21:29:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:49.658 21:29:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:49.658 21:29:24 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:49.658 21:29:24 -- nvmf/common.sh@57 -- # uname 00:25:49.658 21:29:24 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:49.658 21:29:24 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:49.658 21:29:24 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:49.658 21:29:24 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:49.658 21:29:24 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:49.658 21:29:24 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:49.658 21:29:24 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:49.658 21:29:24 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:49.658 21:29:24 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:49.658 21:29:24 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:49.658 21:29:24 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:49.658 21:29:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:49.658 21:29:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:49.658 21:29:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:49.658 21:29:24 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:49.658 21:29:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:49.658 21:29:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:49.658 21:29:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:49.658 21:29:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:49.658 21:29:24 -- nvmf/common.sh@104 -- # continue 2 00:25:49.658 21:29:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:49.658 21:29:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:49.658 21:29:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:49.658 21:29:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:49.658 21:29:24 -- nvmf/common.sh@104 -- # continue 2 00:25:49.658 21:29:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:49.658 21:29:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:49.658 21:29:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:49.658 21:29:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:49.658 21:29:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:49.658 21:29:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:49.658 21:29:24 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:49.658 21:29:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:49.658 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:49.658 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:49.658 altname enp217s0f0np0 00:25:49.658 altname ens818f0np0 00:25:49.658 inet 192.168.100.8/24 scope global mlx_0_0 00:25:49.658 valid_lft forever preferred_lft forever 00:25:49.658 21:29:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:49.658 21:29:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:49.658 21:29:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:49.658 21:29:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:49.658 21:29:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:49.658 21:29:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:49.658 21:29:24 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:49.658 21:29:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:49.658 21:29:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:49.658 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:49.658 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:49.658 altname enp217s0f1np1 00:25:49.658 altname ens818f1np1 00:25:49.658 inet 192.168.100.9/24 scope global mlx_0_1 00:25:49.658 valid_lft forever preferred_lft forever 00:25:49.658 21:29:24 -- nvmf/common.sh@410 -- # return 0 00:25:49.658 21:29:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:49.658 21:29:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:49.659 21:29:24 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:49.659 21:29:24 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:49.659 21:29:24 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:49.659 21:29:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:49.659 21:29:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:49.659 21:29:24 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:49.659 21:29:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:49.659 21:29:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:49.659 21:29:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:49.659 21:29:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:49.659 21:29:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:49.659 21:29:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:49.659 21:29:24 -- nvmf/common.sh@104 -- # continue 2 00:25:49.659 21:29:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:49.659 21:29:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:49.659 21:29:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:49.659 21:29:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:49.659 21:29:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:49.659 21:29:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:49.659 21:29:24 -- nvmf/common.sh@104 -- # continue 2 00:25:49.659 21:29:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:49.659 21:29:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:49.659 21:29:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:49.659 21:29:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:49.659 21:29:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:49.659 21:29:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:49.659 21:29:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:49.659 21:29:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:49.659 21:29:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:49.659 21:29:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:49.659 21:29:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:49.659 21:29:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:49.659 21:29:24 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:49.659 192.168.100.9' 00:25:49.659 21:29:24 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:49.659 192.168.100.9' 00:25:49.659 21:29:24 -- nvmf/common.sh@445 -- # head -n 1 00:25:49.659 21:29:24 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:49.659 21:29:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:49.659 192.168.100.9' 00:25:49.659 21:29:24 -- nvmf/common.sh@446 -- # tail -n +2 00:25:49.659 21:29:24 -- nvmf/common.sh@446 -- # head -n 1 00:25:49.659 21:29:24 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:49.659 21:29:24 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:49.659 21:29:24 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:49.659 21:29:24 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:49.659 21:29:24 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:49.659 21:29:24 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:49.659 21:29:24 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:49.659 21:29:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:49.659 21:29:24 -- common/autotest_common.sh@10 -- # set +x 00:25:49.659 21:29:24 -- host/identify.sh@19 -- # nvmfpid=1797614 00:25:49.659 21:29:24 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:49.659 21:29:24 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:25:49.659 21:29:24 -- host/identify.sh@23 -- # waitforlisten 1797614 00:25:49.659 21:29:24 -- common/autotest_common.sh@819 -- # '[' -z 1797614 ']' 00:25:49.659 21:29:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.659 21:29:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:49.659 21:29:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.659 21:29:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:49.659 21:29:24 -- common/autotest_common.sh@10 -- # set +x 00:25:49.659 [2024-07-26 21:29:24.408639] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:49.659 [2024-07-26 21:29:24.408698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.659 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.659 [2024-07-26 21:29:24.496131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.918 [2024-07-26 21:29:24.536671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:49.918 [2024-07-26 21:29:24.536784] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.918 [2024-07-26 21:29:24.536794] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.918 [2024-07-26 21:29:24.536803] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.918 [2024-07-26 21:29:24.536848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.918 [2024-07-26 21:29:24.536943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.918 [2024-07-26 21:29:24.537030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.918 [2024-07-26 21:29:24.537032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.486 21:29:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:50.486 21:29:25 -- common/autotest_common.sh@852 -- # return 0 00:25:50.486 21:29:25 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:50.486 21:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.486 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.486 [2024-07-26 21:29:25.238818] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c47060/0x1c4b550) succeed. 00:25:50.486 [2024-07-26 21:29:25.249116] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c48650/0x1c8cbe0) succeed. 
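With the transport created above, the test goes on to build the target configuration through rpc_cmd (the calls are traced below). The same bring-up could be driven by hand against the running nvmf_tgt with scripts/rpc.py; this is a minimal sketch, not the test's own helper — the rpc.py path is assumed from the workspace layout above, the default RPC socket is assumed, and the arguments simply repeat the ones that appear in this log:
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport, same arguments as the rpc_cmd call traced above
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB malloc bdev exposed as namespace 1 of cnode1, listening on 192.168.100.8:4420
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420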
00:25:50.749 21:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.749 21:29:25 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:50.749 21:29:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:50.749 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.749 21:29:25 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:50.749 21:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.749 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.749 Malloc0 00:25:50.749 21:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.749 21:29:25 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.749 21:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.749 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.749 21:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.749 21:29:25 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:50.749 21:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.749 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.749 21:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.749 21:29:25 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:50.749 21:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.749 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.749 [2024-07-26 21:29:25.459333] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:50.749 21:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.749 21:29:25 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:50.749 21:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.749 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.749 21:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.749 21:29:25 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:50.749 21:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.749 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:25:50.749 [2024-07-26 21:29:25.475042] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:50.750 [ 00:25:50.750 { 00:25:50.750 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:50.750 "subtype": "Discovery", 00:25:50.750 "listen_addresses": [ 00:25:50.750 { 00:25:50.750 "transport": "RDMA", 00:25:50.750 "trtype": "RDMA", 00:25:50.750 "adrfam": "IPv4", 00:25:50.750 "traddr": "192.168.100.8", 00:25:50.750 "trsvcid": "4420" 00:25:50.750 } 00:25:50.750 ], 00:25:50.750 "allow_any_host": true, 00:25:50.750 "hosts": [] 00:25:50.750 }, 00:25:50.750 { 00:25:50.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.750 "subtype": "NVMe", 00:25:50.750 "listen_addresses": [ 00:25:50.750 { 00:25:50.750 "transport": "RDMA", 00:25:50.750 "trtype": "RDMA", 00:25:50.750 "adrfam": "IPv4", 00:25:50.750 "traddr": "192.168.100.8", 00:25:50.750 "trsvcid": "4420" 00:25:50.750 } 00:25:50.750 ], 00:25:50.750 "allow_any_host": true, 00:25:50.750 "hosts": [], 00:25:50.750 "serial_number": "SPDK00000000000001", 
00:25:50.750 "model_number": "SPDK bdev Controller", 00:25:50.750 "max_namespaces": 32, 00:25:50.750 "min_cntlid": 1, 00:25:50.750 "max_cntlid": 65519, 00:25:50.750 "namespaces": [ 00:25:50.750 { 00:25:50.750 "nsid": 1, 00:25:50.750 "bdev_name": "Malloc0", 00:25:50.750 "name": "Malloc0", 00:25:50.750 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:50.750 "eui64": "ABCDEF0123456789", 00:25:50.750 "uuid": "3dcd79ef-90f2-4de4-a329-edd1581c87ef" 00:25:50.750 } 00:25:50.750 ] 00:25:50.750 } 00:25:50.750 ] 00:25:50.750 21:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.750 21:29:25 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:50.750 [2024-07-26 21:29:25.517213] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:50.750 [2024-07-26 21:29:25.517262] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797700 ] 00:25:50.750 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.750 [2024-07-26 21:29:25.564292] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:50.750 [2024-07-26 21:29:25.564360] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:50.750 [2024-07-26 21:29:25.564375] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:50.750 [2024-07-26 21:29:25.564380] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:50.750 [2024-07-26 21:29:25.564412] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:50.750 [2024-07-26 21:29:25.575059] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:25:50.750 [2024-07-26 21:29:25.585127] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:50.750 [2024-07-26 21:29:25.585138] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:50.750 [2024-07-26 21:29:25.585146] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585154] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585160] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585166] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585173] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585179] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585188] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585194] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585200] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585207] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585213] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585219] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585225] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585232] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585238] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585244] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585250] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585256] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585263] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585269] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585275] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585281] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585287] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 
21:29:25.585294] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585300] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585306] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585312] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585318] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585325] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585331] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585337] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585343] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:50.750 [2024-07-26 21:29:25.585348] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:50.750 [2024-07-26 21:29:25.585353] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:50.750 [2024-07-26 21:29:25.585372] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.585385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x184100 00:25:50.750 [2024-07-26 21:29:25.590632] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.750 [2024-07-26 21:29:25.590644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:50.750 [2024-07-26 21:29:25.590655] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.590663] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:50.750 [2024-07-26 21:29:25.590670] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:50.750 [2024-07-26 21:29:25.590677] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:50.750 [2024-07-26 21:29:25.590689] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.590698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.750 [2024-07-26 21:29:25.590715] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.750 [2024-07-26 21:29:25.590721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:50.750 [2024-07-26 21:29:25.590728] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:50.750 [2024-07-26 21:29:25.590734] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.590741] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:50.750 [2024-07-26 21:29:25.590748] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.590756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.750 [2024-07-26 21:29:25.590775] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.750 [2024-07-26 21:29:25.590781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:50.750 [2024-07-26 21:29:25.590788] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:50.750 [2024-07-26 21:29:25.590794] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.590801] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:50.750 [2024-07-26 21:29:25.590809] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.750 [2024-07-26 21:29:25.590816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.750 [2024-07-26 21:29:25.590834] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.750 [2024-07-26 21:29:25.590839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:50.750 [2024-07-26 21:29:25.590847] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:50.751 [2024-07-26 21:29:25.590853] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.590861] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.590869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.751 [2024-07-26 21:29:25.590888] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.590894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.590902] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:50.751 [2024-07-26 21:29:25.590908] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:50.751 [2024-07-26 21:29:25.590914] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.590921] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:50.751 [2024-07-26 21:29:25.591028] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:50.751 [2024-07-26 21:29:25.591034] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:50.751 [2024-07-26 21:29:25.591043] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.751 [2024-07-26 21:29:25.591074] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.591080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.591086] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:50.751 [2024-07-26 21:29:25.591092] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591101] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.751 [2024-07-26 21:29:25.591124] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.591130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.591136] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:50.751 [2024-07-26 21:29:25.591142] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:50.751 [2024-07-26 21:29:25.591148] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591155] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:50.751 [2024-07-26 21:29:25.591164] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:50.751 [2024-07-26 21:29:25.591174] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:50.751 [2024-07-26 21:29:25.591214] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.591220] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.591229] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:50.751 [2024-07-26 21:29:25.591236] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:50.751 [2024-07-26 21:29:25.591242] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:50.751 [2024-07-26 21:29:25.591249] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:50.751 [2024-07-26 21:29:25.591255] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:50.751 [2024-07-26 21:29:25.591261] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:50.751 [2024-07-26 21:29:25.591267] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591277] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:50.751 [2024-07-26 21:29:25.591284] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.751 [2024-07-26 21:29:25.591314] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.591320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.591329] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.751 [2024-07-26 21:29:25.591343] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.751 [2024-07-26 21:29:25.591357] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.751 [2024-07-26 21:29:25.591371] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.751 [2024-07-26 21:29:25.591384] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:25:50.751 [2024-07-26 21:29:25.591390] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591401] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:50.751 [2024-07-26 21:29:25.591408] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.751 [2024-07-26 21:29:25.591436] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.591441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.591448] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:50.751 [2024-07-26 21:29:25.591458] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:50.751 [2024-07-26 21:29:25.591464] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591473] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:50.751 [2024-07-26 21:29:25.591506] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.591512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.591519] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591529] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:50.751 [2024-07-26 21:29:25.591548] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x184100 00:25:50.751 [2024-07-26 21:29:25.591564] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.751 [2024-07-26 21:29:25.591595] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.591600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.591611] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x184100 00:25:50.751 [2024-07-26 21:29:25.591630] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591637] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.591642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.591649] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591655] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.751 [2024-07-26 21:29:25.591660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.751 [2024-07-26 21:29:25.591670] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x184100 00:25:50.751 [2024-07-26 21:29:25.591684] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:50.751 [2024-07-26 21:29:25.591706] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.752 [2024-07-26 21:29:25.591712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:50.752 [2024-07-26 21:29:25.591722] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:50.752 ===================================================== 00:25:50.752 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:50.752 ===================================================== 00:25:50.752 Controller Capabilities/Features 00:25:50.752 ================================ 00:25:50.752 Vendor ID: 0000 00:25:50.752 Subsystem Vendor ID: 0000 00:25:50.752 Serial Number: .................... 00:25:50.752 Model Number: ........................................ 
00:25:50.752 Firmware Version: 24.01.1 00:25:50.752 Recommended Arb Burst: 0 00:25:50.752 IEEE OUI Identifier: 00 00 00 00:25:50.752 Multi-path I/O 00:25:50.752 May have multiple subsystem ports: No 00:25:50.752 May have multiple controllers: No 00:25:50.752 Associated with SR-IOV VF: No 00:25:50.752 Max Data Transfer Size: 131072 00:25:50.752 Max Number of Namespaces: 0 00:25:50.752 Max Number of I/O Queues: 1024 00:25:50.752 NVMe Specification Version (VS): 1.3 00:25:50.752 NVMe Specification Version (Identify): 1.3 00:25:50.752 Maximum Queue Entries: 128 00:25:50.752 Contiguous Queues Required: Yes 00:25:50.752 Arbitration Mechanisms Supported 00:25:50.752 Weighted Round Robin: Not Supported 00:25:50.752 Vendor Specific: Not Supported 00:25:50.752 Reset Timeout: 15000 ms 00:25:50.752 Doorbell Stride: 4 bytes 00:25:50.752 NVM Subsystem Reset: Not Supported 00:25:50.752 Command Sets Supported 00:25:50.752 NVM Command Set: Supported 00:25:50.752 Boot Partition: Not Supported 00:25:50.752 Memory Page Size Minimum: 4096 bytes 00:25:50.752 Memory Page Size Maximum: 4096 bytes 00:25:50.752 Persistent Memory Region: Not Supported 00:25:50.752 Optional Asynchronous Events Supported 00:25:50.752 Namespace Attribute Notices: Not Supported 00:25:50.752 Firmware Activation Notices: Not Supported 00:25:50.752 ANA Change Notices: Not Supported 00:25:50.752 PLE Aggregate Log Change Notices: Not Supported 00:25:50.752 LBA Status Info Alert Notices: Not Supported 00:25:50.752 EGE Aggregate Log Change Notices: Not Supported 00:25:50.752 Normal NVM Subsystem Shutdown event: Not Supported 00:25:50.752 Zone Descriptor Change Notices: Not Supported 00:25:50.752 Discovery Log Change Notices: Supported 00:25:50.752 Controller Attributes 00:25:50.752 128-bit Host Identifier: Not Supported 00:25:50.752 Non-Operational Permissive Mode: Not Supported 00:25:50.752 NVM Sets: Not Supported 00:25:50.752 Read Recovery Levels: Not Supported 00:25:50.752 Endurance Groups: Not Supported 00:25:50.752 Predictable Latency Mode: Not Supported 00:25:50.752 Traffic Based Keep ALive: Not Supported 00:25:50.752 Namespace Granularity: Not Supported 00:25:50.752 SQ Associations: Not Supported 00:25:50.752 UUID List: Not Supported 00:25:50.752 Multi-Domain Subsystem: Not Supported 00:25:50.752 Fixed Capacity Management: Not Supported 00:25:50.752 Variable Capacity Management: Not Supported 00:25:50.752 Delete Endurance Group: Not Supported 00:25:50.752 Delete NVM Set: Not Supported 00:25:50.752 Extended LBA Formats Supported: Not Supported 00:25:50.752 Flexible Data Placement Supported: Not Supported 00:25:50.752 00:25:50.752 Controller Memory Buffer Support 00:25:50.752 ================================ 00:25:50.752 Supported: No 00:25:50.752 00:25:50.752 Persistent Memory Region Support 00:25:50.752 ================================ 00:25:50.752 Supported: No 00:25:50.752 00:25:50.752 Admin Command Set Attributes 00:25:50.752 ============================ 00:25:50.752 Security Send/Receive: Not Supported 00:25:50.752 Format NVM: Not Supported 00:25:50.752 Firmware Activate/Download: Not Supported 00:25:50.752 Namespace Management: Not Supported 00:25:50.752 Device Self-Test: Not Supported 00:25:50.752 Directives: Not Supported 00:25:50.752 NVMe-MI: Not Supported 00:25:50.752 Virtualization Management: Not Supported 00:25:50.752 Doorbell Buffer Config: Not Supported 00:25:50.752 Get LBA Status Capability: Not Supported 00:25:50.752 Command & Feature Lockdown Capability: Not Supported 00:25:50.752 Abort Command Limit: 1 00:25:50.752 
Async Event Request Limit: 4 00:25:50.752 Number of Firmware Slots: N/A 00:25:50.752 Firmware Slot 1 Read-Only: N/A 00:25:50.752 Firmware Activation Without Reset: N/A 00:25:50.752 Multiple Update Detection Support: N/A 00:25:50.752 Firmware Update Granularity: No Information Provided 00:25:50.752 Per-Namespace SMART Log: No 00:25:50.752 Asymmetric Namespace Access Log Page: Not Supported 00:25:50.752 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:50.752 Command Effects Log Page: Not Supported 00:25:50.752 Get Log Page Extended Data: Supported 00:25:50.752 Telemetry Log Pages: Not Supported 00:25:50.752 Persistent Event Log Pages: Not Supported 00:25:50.752 Supported Log Pages Log Page: May Support 00:25:50.752 Commands Supported & Effects Log Page: Not Supported 00:25:50.752 Feature Identifiers & Effects Log Page:May Support 00:25:50.752 NVMe-MI Commands & Effects Log Page: May Support 00:25:50.752 Data Area 4 for Telemetry Log: Not Supported 00:25:50.752 Error Log Page Entries Supported: 128 00:25:50.752 Keep Alive: Not Supported 00:25:50.752 00:25:50.752 NVM Command Set Attributes 00:25:50.752 ========================== 00:25:50.752 Submission Queue Entry Size 00:25:50.752 Max: 1 00:25:50.752 Min: 1 00:25:50.752 Completion Queue Entry Size 00:25:50.752 Max: 1 00:25:50.752 Min: 1 00:25:50.752 Number of Namespaces: 0 00:25:50.752 Compare Command: Not Supported 00:25:50.752 Write Uncorrectable Command: Not Supported 00:25:50.752 Dataset Management Command: Not Supported 00:25:50.752 Write Zeroes Command: Not Supported 00:25:50.752 Set Features Save Field: Not Supported 00:25:50.752 Reservations: Not Supported 00:25:50.752 Timestamp: Not Supported 00:25:50.752 Copy: Not Supported 00:25:50.752 Volatile Write Cache: Not Present 00:25:50.752 Atomic Write Unit (Normal): 1 00:25:50.752 Atomic Write Unit (PFail): 1 00:25:50.752 Atomic Compare & Write Unit: 1 00:25:50.752 Fused Compare & Write: Supported 00:25:50.752 Scatter-Gather List 00:25:50.752 SGL Command Set: Supported 00:25:50.752 SGL Keyed: Supported 00:25:50.752 SGL Bit Bucket Descriptor: Not Supported 00:25:50.752 SGL Metadata Pointer: Not Supported 00:25:50.752 Oversized SGL: Not Supported 00:25:50.752 SGL Metadata Address: Not Supported 00:25:50.752 SGL Offset: Supported 00:25:50.752 Transport SGL Data Block: Not Supported 00:25:50.752 Replay Protected Memory Block: Not Supported 00:25:50.752 00:25:50.752 Firmware Slot Information 00:25:50.752 ========================= 00:25:50.752 Active slot: 0 00:25:50.752 00:25:50.752 00:25:50.752 Error Log 00:25:50.752 ========= 00:25:50.752 00:25:50.752 Active Namespaces 00:25:50.752 ================= 00:25:50.752 Discovery Log Page 00:25:50.752 ================== 00:25:50.752 Generation Counter: 2 00:25:50.752 Number of Records: 2 00:25:50.752 Record Format: 0 00:25:50.752 00:25:50.752 Discovery Log Entry 0 00:25:50.752 ---------------------- 00:25:50.752 Transport Type: 1 (RDMA) 00:25:50.752 Address Family: 1 (IPv4) 00:25:50.752 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:50.752 Entry Flags: 00:25:50.752 Duplicate Returned Information: 1 00:25:50.752 Explicit Persistent Connection Support for Discovery: 1 00:25:50.752 Transport Requirements: 00:25:50.752 Secure Channel: Not Required 00:25:50.752 Port ID: 0 (0x0000) 00:25:50.752 Controller ID: 65535 (0xffff) 00:25:50.752 Admin Max SQ Size: 128 00:25:50.752 Transport Service Identifier: 4420 00:25:50.752 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:50.752 Transport Address: 192.168.100.8 
00:25:50.752 Transport Specific Address Subtype - RDMA 00:25:50.752 RDMA QP Service Type: 1 (Reliable Connected) 00:25:50.752 RDMA Provider Type: 1 (No provider specified) 00:25:50.752 RDMA CM Service: 1 (RDMA_CM) 00:25:50.752 Discovery Log Entry 1 00:25:50.752 ---------------------- 00:25:50.752 Transport Type: 1 (RDMA) 00:25:50.752 Address Family: 1 (IPv4) 00:25:50.752 Subsystem Type: 2 (NVM Subsystem) 00:25:50.752 Entry Flags: 00:25:50.752 Duplicate Returned Information: 0 00:25:50.752 Explicit Persistent Connection Support for Discovery: 0 00:25:50.752 Transport Requirements: 00:25:50.752 Secure Channel: Not Required 00:25:50.752 Port ID: 0 (0x0000) 00:25:50.753 Controller ID: 65535 (0xffff) 00:25:50.753 Admin Max SQ Size: [2024-07-26 21:29:25.591794] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:50.753 [2024-07-26 21:29:25.591804] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 20369 doesn't match qid 00:25:50.753 [2024-07-26 21:29:25.591818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32749 cdw0:5 sqhd:1e28 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.591824] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 20369 doesn't match qid 00:25:50.753 [2024-07-26 21:29:25.591833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32749 cdw0:5 sqhd:1e28 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.591840] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 20369 doesn't match qid 00:25:50.753 [2024-07-26 21:29:25.591848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32749 cdw0:5 sqhd:1e28 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.591855] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 20369 doesn't match qid 00:25:50.753 [2024-07-26 21:29:25.591863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32749 cdw0:5 sqhd:1e28 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.591873] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.591881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.591897] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.591904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.591913] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.591921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.591927] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.591946] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.591953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.591959] 
nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:50.753 [2024-07-26 21:29:25.591966] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:50.753 [2024-07-26 21:29:25.591973] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.591982] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.591990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592005] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592019] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592028] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592052] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592067] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592076] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592106] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592118] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592126] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592154] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592166] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592175] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592203] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592215] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592224] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592255] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592267] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592276] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592302] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592314] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592322] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592348] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592361] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592370] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592400] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:25:50.753 [2024-07-26 21:29:25.592412] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592420] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592444] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592456] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592465] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592489] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.753 [2024-07-26 21:29:25.592494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:50.753 [2024-07-26 21:29:25.592500] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592509] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.753 [2024-07-26 21:29:25.592517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.753 [2024-07-26 21:29:25.592539] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592551] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592559] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.592587] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592599] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592607] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 
21:29:25.592637] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592649] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592658] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.592688] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592699] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592708] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.592732] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592744] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592752] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.592776] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592788] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592797] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.592818] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592830] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592839] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.592871] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592882] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592891] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.592914] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592926] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592935] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.592958] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.592964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.592971] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592979] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.592987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.593009] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.593014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.593021] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.593029] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.593037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.593061] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.593066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:50.754 
[2024-07-26 21:29:25.593073] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.593081] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.593089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.593105] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.593110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.593117] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.593126] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.593133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.593149] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.754 [2024-07-26 21:29:25.593155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:50.754 [2024-07-26 21:29:25.593161] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.593170] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.754 [2024-07-26 21:29:25.593178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.754 [2024-07-26 21:29:25.593197] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593209] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593218] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593242] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593253] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593262] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593292] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593304] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593312] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593342] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593354] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593362] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593386] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593398] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593407] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593434] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593446] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593455] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593486] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593498] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593507] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593529] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593540] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593549] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593575] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593587] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593595] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593621] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593637] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593646] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593671] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593683] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593692] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593714] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 
21:29:25.593726] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593734] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593767] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593779] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593788] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593813] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593825] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593834] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593862] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593874] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593882] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593914] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593926] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593934] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.755 [2024-07-26 21:29:25.593966] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.755 [2024-07-26 21:29:25.593971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:50.755 [2024-07-26 21:29:25.593978] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593987] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.755 [2024-07-26 21:29:25.593994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594012] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594024] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594034] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594056] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594068] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594077] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594102] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594114] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594123] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594150] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594162] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594171] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594201] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594212] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594221] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594251] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594263] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594271] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594295] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594307] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594317] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594343] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594355] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594363] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594387] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 
21:29:25.594399] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594408] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594434] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594445] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594454] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594480] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594492] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594500] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594532] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594543] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594552] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.594580] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.594585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.594593] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594602] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.594609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.598634] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.598642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.598649] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.598658] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.598666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:50.756 [2024-07-26 21:29:25.598688] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:50.756 [2024-07-26 21:29:25.598694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:25:50.756 [2024-07-26 21:29:25.598700] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:50.756 [2024-07-26 21:29:25.598707] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:25:51.020 128 00:25:51.020 Transport Service Identifier: 4420 00:25:51.020 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:51.020 Transport Address: 192.168.100.8 00:25:51.020 Transport Specific Address Subtype - RDMA 00:25:51.020 RDMA QP Service Type: 1 (Reliable Connected) 00:25:51.020 RDMA Provider Type: 1 (No provider specified) 00:25:51.020 RDMA CM Service: 1 (RDMA_CM) 00:25:51.020 21:29:25 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:51.020 [2024-07-26 21:29:25.667040] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:51.020 [2024-07-26 21:29:25.667098] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797771 ] 00:25:51.020 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.020 [2024-07-26 21:29:25.713766] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:51.020 [2024-07-26 21:29:25.713832] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:51.020 [2024-07-26 21:29:25.713848] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:51.020 [2024-07-26 21:29:25.713853] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:51.020 [2024-07-26 21:29:25.713877] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:51.020 [2024-07-26 21:29:25.724036] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:25:51.020 [2024-07-26 21:29:25.734101] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:51.020 [2024-07-26 21:29:25.734111] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:51.020 [2024-07-26 21:29:25.734118] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734128] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734134] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734141] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734147] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734153] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734159] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734166] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734172] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734178] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734184] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734191] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734197] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734203] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734209] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734216] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734222] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734228] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734234] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734241] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734247] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734253] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734259] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 
21:29:25.734266] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734272] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734278] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734284] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734291] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734297] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734303] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:51.020 [2024-07-26 21:29:25.734310] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.734315] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:51.021 [2024-07-26 21:29:25.734322] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:51.021 [2024-07-26 21:29:25.734327] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:51.021 [2024-07-26 21:29:25.734342] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.734353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x184100 00:25:51.021 [2024-07-26 21:29:25.739633] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.021 [2024-07-26 21:29:25.739642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:51.021 [2024-07-26 21:29:25.739650] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.739657] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:51.021 [2024-07-26 21:29:25.739664] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:51.021 [2024-07-26 21:29:25.739670] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:51.021 [2024-07-26 21:29:25.739681] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.739690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.021 [2024-07-26 21:29:25.739705] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.021 [2024-07-26 21:29:25.739711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:51.021 [2024-07-26 21:29:25.739717] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:51.021 [2024-07-26 21:29:25.739723] nvme_rdma.c:2425:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.739730] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:51.021 [2024-07-26 21:29:25.739738] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.739745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.021 [2024-07-26 21:29:25.739764] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.021 [2024-07-26 21:29:25.739769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:51.021 [2024-07-26 21:29:25.739776] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:51.021 [2024-07-26 21:29:25.739782] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.739789] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:51.021 [2024-07-26 21:29:25.739796] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.739804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.021 [2024-07-26 21:29:25.739822] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.021 [2024-07-26 21:29:25.739828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:51.021 [2024-07-26 21:29:25.739835] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:51.021 [2024-07-26 21:29:25.739843] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.739851] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.739859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.021 [2024-07-26 21:29:25.739881] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.021 [2024-07-26 21:29:25.739887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:51.021 [2024-07-26 21:29:25.739893] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:51.021 [2024-07-26 21:29:25.739899] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:51.021 [2024-07-26 21:29:25.739905] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.739912] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:25:51.021 [2024-07-26 21:29:25.740019] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:51.021 [2024-07-26 21:29:25.740024] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:51.021 [2024-07-26 21:29:25.740032] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.021 [2024-07-26 21:29:25.740056] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.021 [2024-07-26 21:29:25.740062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:51.021 [2024-07-26 21:29:25.740068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:51.021 [2024-07-26 21:29:25.740074] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740082] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.021 [2024-07-26 21:29:25.740106] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.021 [2024-07-26 21:29:25.740112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:51.021 [2024-07-26 21:29:25.740118] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:51.021 [2024-07-26 21:29:25.740124] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:51.021 [2024-07-26 21:29:25.740130] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740137] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:51.021 [2024-07-26 21:29:25.740145] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:51.021 [2024-07-26 21:29:25.740154] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:51.021 [2024-07-26 21:29:25.740201] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.021 [2024-07-26 21:29:25.740206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:51.021 [2024-07-26 21:29:25.740215] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:51.021 [2024-07-26 21:29:25.740221] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:51.021 [2024-07-26 21:29:25.740227] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:51.021 [2024-07-26 21:29:25.740232] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:51.021 [2024-07-26 21:29:25.740238] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:51.021 [2024-07-26 21:29:25.740244] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:51.021 [2024-07-26 21:29:25.740250] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740259] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:51.021 [2024-07-26 21:29:25.740267] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.021 [2024-07-26 21:29:25.740299] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.021 [2024-07-26 21:29:25.740305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:51.021 [2024-07-26 21:29:25.740313] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.021 [2024-07-26 21:29:25.740327] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.021 [2024-07-26 21:29:25.740341] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.021 [2024-07-26 21:29:25.740355] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.021 [2024-07-26 21:29:25.740368] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:51.021 [2024-07-26 21:29:25.740374] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740384] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:51.021 [2024-07-26 21:29:25.740391] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.021 [2024-07-26 21:29:25.740399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.021 [2024-07-26 21:29:25.740415] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.740422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.740428] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:51.022 [2024-07-26 21:29:25.740435] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740441] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740448] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740457] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740465] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.022 [2024-07-26 21:29:25.740492] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.740498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.740546] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740552] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740560] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740568] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184100 00:25:51.022 [2024-07-26 21:29:25.740604] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.740609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.740624] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:51.022 
[2024-07-26 21:29:25.740638] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740644] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740653] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740661] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:51.022 [2024-07-26 21:29:25.740696] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.740702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.740715] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740723] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740731] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740740] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:51.022 [2024-07-26 21:29:25.740774] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.740779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.740788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740794] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740802] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740811] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740819] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740825] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740832] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:25:51.022 [2024-07-26 21:29:25.740838] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:51.022 [2024-07-26 21:29:25.740844] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:51.022 [2024-07-26 21:29:25.740859] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.022 [2024-07-26 21:29:25.740874] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.022 [2024-07-26 21:29:25.740892] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.740897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.740904] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740910] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.740916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.740922] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740931] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.022 [2024-07-26 21:29:25.740960] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.740966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.740972] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740981] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.740989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.022 [2024-07-26 21:29:25.741012] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.741018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.741024] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.741033] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 
lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.741041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.022 [2024-07-26 21:29:25.741061] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.741067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.741073] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.741085] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.741092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x184100 00:25:51.022 [2024-07-26 21:29:25.741101] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.741108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x184100 00:25:51.022 [2024-07-26 21:29:25.741117] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.741124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x184100 00:25:51.022 [2024-07-26 21:29:25.741133] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.741140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x184100 00:25:51.022 [2024-07-26 21:29:25.741149] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.741154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.741167] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.741174] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.022 [2024-07-26 21:29:25.741179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:51.022 [2024-07-26 21:29:25.741188] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:51.022 [2024-07-26 21:29:25.741196] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.023 [2024-07-26 21:29:25.741202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:51.023 [2024-07-26 21:29:25.741209] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:51.023 [2024-07-26 21:29:25.741215] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.023 [2024-07-26 21:29:25.741220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:51.023 [2024-07-26 21:29:25.741231] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:51.023 ===================================================== 00:25:51.023 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.023 ===================================================== 00:25:51.023 Controller Capabilities/Features 00:25:51.023 ================================ 00:25:51.023 Vendor ID: 8086 00:25:51.023 Subsystem Vendor ID: 8086 00:25:51.023 Serial Number: SPDK00000000000001 00:25:51.023 Model Number: SPDK bdev Controller 00:25:51.023 Firmware Version: 24.01.1 00:25:51.023 Recommended Arb Burst: 6 00:25:51.023 IEEE OUI Identifier: e4 d2 5c 00:25:51.023 Multi-path I/O 00:25:51.023 May have multiple subsystem ports: Yes 00:25:51.023 May have multiple controllers: Yes 00:25:51.023 Associated with SR-IOV VF: No 00:25:51.023 Max Data Transfer Size: 131072 00:25:51.023 Max Number of Namespaces: 32 00:25:51.023 Max Number of I/O Queues: 127 00:25:51.023 NVMe Specification Version (VS): 1.3 00:25:51.023 NVMe Specification Version (Identify): 1.3 00:25:51.023 Maximum Queue Entries: 128 00:25:51.023 Contiguous Queues Required: Yes 00:25:51.023 Arbitration Mechanisms Supported 00:25:51.023 Weighted Round Robin: Not Supported 00:25:51.023 Vendor Specific: Not Supported 00:25:51.023 Reset Timeout: 15000 ms 00:25:51.023 Doorbell Stride: 4 bytes 00:25:51.023 NVM Subsystem Reset: Not Supported 00:25:51.023 Command Sets Supported 00:25:51.023 NVM Command Set: Supported 00:25:51.023 Boot Partition: Not Supported 00:25:51.023 Memory Page Size Minimum: 4096 bytes 00:25:51.023 Memory Page Size Maximum: 4096 bytes 00:25:51.023 Persistent Memory Region: Not Supported 00:25:51.023 Optional Asynchronous Events Supported 00:25:51.023 Namespace Attribute Notices: Supported 00:25:51.023 Firmware Activation Notices: Not Supported 00:25:51.023 ANA Change Notices: Not Supported 00:25:51.023 PLE Aggregate Log Change Notices: Not Supported 00:25:51.023 LBA Status Info Alert Notices: Not Supported 00:25:51.023 EGE Aggregate Log Change Notices: Not Supported 00:25:51.023 Normal NVM Subsystem Shutdown event: Not Supported 00:25:51.023 Zone Descriptor Change Notices: Not Supported 00:25:51.023 Discovery Log Change Notices: Not Supported 00:25:51.023 Controller Attributes 00:25:51.023 128-bit Host Identifier: Supported 00:25:51.023 Non-Operational Permissive Mode: Not Supported 00:25:51.023 NVM Sets: Not Supported 00:25:51.023 Read Recovery Levels: Not Supported 00:25:51.023 Endurance Groups: Not Supported 00:25:51.023 Predictable Latency Mode: Not Supported 00:25:51.023 Traffic Based Keep ALive: Not Supported 00:25:51.023 Namespace Granularity: Not Supported 00:25:51.023 SQ Associations: Not Supported 00:25:51.023 UUID List: Not Supported 00:25:51.023 Multi-Domain Subsystem: Not Supported 00:25:51.023 Fixed Capacity Management: Not Supported 00:25:51.023 Variable Capacity Management: Not Supported 00:25:51.023 Delete Endurance Group: Not Supported 00:25:51.023 Delete NVM Set: Not Supported 00:25:51.023 Extended LBA Formats Supported: Not Supported 00:25:51.023 Flexible Data Placement Supported: Not Supported 00:25:51.023 00:25:51.023 Controller Memory Buffer Support 00:25:51.023 
================================ 00:25:51.023 Supported: No 00:25:51.023 00:25:51.023 Persistent Memory Region Support 00:25:51.023 ================================ 00:25:51.023 Supported: No 00:25:51.023 00:25:51.023 Admin Command Set Attributes 00:25:51.023 ============================ 00:25:51.023 Security Send/Receive: Not Supported 00:25:51.023 Format NVM: Not Supported 00:25:51.023 Firmware Activate/Download: Not Supported 00:25:51.023 Namespace Management: Not Supported 00:25:51.023 Device Self-Test: Not Supported 00:25:51.023 Directives: Not Supported 00:25:51.023 NVMe-MI: Not Supported 00:25:51.023 Virtualization Management: Not Supported 00:25:51.023 Doorbell Buffer Config: Not Supported 00:25:51.023 Get LBA Status Capability: Not Supported 00:25:51.023 Command & Feature Lockdown Capability: Not Supported 00:25:51.023 Abort Command Limit: 4 00:25:51.023 Async Event Request Limit: 4 00:25:51.023 Number of Firmware Slots: N/A 00:25:51.023 Firmware Slot 1 Read-Only: N/A 00:25:51.023 Firmware Activation Without Reset: N/A 00:25:51.023 Multiple Update Detection Support: N/A 00:25:51.023 Firmware Update Granularity: No Information Provided 00:25:51.023 Per-Namespace SMART Log: No 00:25:51.023 Asymmetric Namespace Access Log Page: Not Supported 00:25:51.023 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:51.023 Command Effects Log Page: Supported 00:25:51.023 Get Log Page Extended Data: Supported 00:25:51.023 Telemetry Log Pages: Not Supported 00:25:51.023 Persistent Event Log Pages: Not Supported 00:25:51.023 Supported Log Pages Log Page: May Support 00:25:51.023 Commands Supported & Effects Log Page: Not Supported 00:25:51.023 Feature Identifiers & Effects Log Page:May Support 00:25:51.023 NVMe-MI Commands & Effects Log Page: May Support 00:25:51.023 Data Area 4 for Telemetry Log: Not Supported 00:25:51.023 Error Log Page Entries Supported: 128 00:25:51.023 Keep Alive: Supported 00:25:51.023 Keep Alive Granularity: 10000 ms 00:25:51.023 00:25:51.023 NVM Command Set Attributes 00:25:51.023 ========================== 00:25:51.023 Submission Queue Entry Size 00:25:51.023 Max: 64 00:25:51.023 Min: 64 00:25:51.023 Completion Queue Entry Size 00:25:51.023 Max: 16 00:25:51.023 Min: 16 00:25:51.023 Number of Namespaces: 32 00:25:51.023 Compare Command: Supported 00:25:51.023 Write Uncorrectable Command: Not Supported 00:25:51.023 Dataset Management Command: Supported 00:25:51.023 Write Zeroes Command: Supported 00:25:51.023 Set Features Save Field: Not Supported 00:25:51.023 Reservations: Supported 00:25:51.023 Timestamp: Not Supported 00:25:51.023 Copy: Supported 00:25:51.023 Volatile Write Cache: Present 00:25:51.023 Atomic Write Unit (Normal): 1 00:25:51.023 Atomic Write Unit (PFail): 1 00:25:51.023 Atomic Compare & Write Unit: 1 00:25:51.023 Fused Compare & Write: Supported 00:25:51.023 Scatter-Gather List 00:25:51.023 SGL Command Set: Supported 00:25:51.023 SGL Keyed: Supported 00:25:51.023 SGL Bit Bucket Descriptor: Not Supported 00:25:51.023 SGL Metadata Pointer: Not Supported 00:25:51.023 Oversized SGL: Not Supported 00:25:51.023 SGL Metadata Address: Not Supported 00:25:51.023 SGL Offset: Supported 00:25:51.023 Transport SGL Data Block: Not Supported 00:25:51.023 Replay Protected Memory Block: Not Supported 00:25:51.023 00:25:51.023 Firmware Slot Information 00:25:51.023 ========================= 00:25:51.023 Active slot: 1 00:25:51.023 Slot 1 Firmware Revision: 24.01.1 00:25:51.023 00:25:51.023 00:25:51.023 Commands Supported and Effects 00:25:51.023 ============================== 
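The identify dump woven through this part of the log (controller capabilities above, command support and power/health information continuing below) is the Identify Controller data the host read back from the RDMA target at 192.168.100.8:4420 after the fabric bring-up traced in the preceding debug records. As a rough illustrative sketch only — not the script this job actually runs — a minimal standalone SPDK host program that performs the same connect-and-identify flow could look like the following. The transport parameters are copied from the log; the program name ("nvmf_identify_sketch") and the fields printed at the end are my own choices, and the sketch assumes SPDK's public host API (spdk_nvme_transport_id_parse(), spdk_nvme_connect(), spdk_nvme_ctrlr_get_data()) as shipped in this SPDK version.

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Initialize the SPDK environment (hugepages, device access). */
        spdk_env_opts_init(&opts);
        opts.name = "nvmf_identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init() failed\n");
            return 1;
        }

        /* Transport ID taken from this log: RDMA target 192.168.100.8:4420. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }

        /* spdk_nvme_connect() performs the sequence traced in the debug log:
         * FABRIC CONNECT, CAP/VS property reads, the CC.EN / CSTS.RDY enable
         * handshake, IDENTIFY, AER setup, keep-alive and queue negotiation. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "spdk_nvme_connect() failed\n");
            return 1;
        }

        /* The Identify Controller data backs the capability listing in the log. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model: %.*s  Serial: %.*s  FW: %.*s  MDTS: %u\n",
               (int)sizeof(cdata->mn), cdata->mn,
               (int)sizeof(cdata->sn), cdata->sn,
               (int)sizeof(cdata->fr), cdata->fr,
               cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

Note that the controller-enable handshake visible in the trace (Setting CC.EN = 1, then waiting for CSTS.RDY = 1) happens entirely inside spdk_nvme_connect(); an application never touches those registers directly for a fabrics controller.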
00:25:51.023 Admin Commands 00:25:51.023 -------------- 00:25:51.023 Get Log Page (02h): Supported 00:25:51.023 Identify (06h): Supported 00:25:51.023 Abort (08h): Supported 00:25:51.023 Set Features (09h): Supported 00:25:51.023 Get Features (0Ah): Supported 00:25:51.023 Asynchronous Event Request (0Ch): Supported 00:25:51.023 Keep Alive (18h): Supported 00:25:51.023 I/O Commands 00:25:51.023 ------------ 00:25:51.023 Flush (00h): Supported LBA-Change 00:25:51.023 Write (01h): Supported LBA-Change 00:25:51.023 Read (02h): Supported 00:25:51.023 Compare (05h): Supported 00:25:51.023 Write Zeroes (08h): Supported LBA-Change 00:25:51.023 Dataset Management (09h): Supported LBA-Change 00:25:51.023 Copy (19h): Supported LBA-Change 00:25:51.023 Unknown (79h): Supported LBA-Change 00:25:51.023 Unknown (7Ah): Supported 00:25:51.023 00:25:51.023 Error Log 00:25:51.023 ========= 00:25:51.023 00:25:51.023 Arbitration 00:25:51.023 =========== 00:25:51.023 Arbitration Burst: 1 00:25:51.023 00:25:51.023 Power Management 00:25:51.023 ================ 00:25:51.023 Number of Power States: 1 00:25:51.023 Current Power State: Power State #0 00:25:51.023 Power State #0: 00:25:51.023 Max Power: 0.00 W 00:25:51.023 Non-Operational State: Operational 00:25:51.023 Entry Latency: Not Reported 00:25:51.023 Exit Latency: Not Reported 00:25:51.023 Relative Read Throughput: 0 00:25:51.023 Relative Read Latency: 0 00:25:51.024 Relative Write Throughput: 0 00:25:51.024 Relative Write Latency: 0 00:25:51.024 Idle Power: Not Reported 00:25:51.024 Active Power: Not Reported 00:25:51.024 Non-Operational Permissive Mode: Not Supported 00:25:51.024 00:25:51.024 Health Information 00:25:51.024 ================== 00:25:51.024 Critical Warnings: 00:25:51.024 Available Spare Space: OK 00:25:51.024 Temperature: OK 00:25:51.024 Device Reliability: OK 00:25:51.024 Read Only: No 00:25:51.024 Volatile Memory Backup: OK 00:25:51.024 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:51.024 Temperature Threshol[2024-07-26 21:29:25.741314] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741341] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741353] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741376] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:51.024 [2024-07-26 21:29:25.741385] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50235 doesn't match qid 00:25:51.024 [2024-07-26 21:29:25.741400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32557 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741407] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50235 doesn't match qid 00:25:51.024 [2024-07-26 21:29:25.741416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32557 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741422] 
nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50235 doesn't match qid 00:25:51.024 [2024-07-26 21:29:25.741431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32557 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741437] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50235 doesn't match qid 00:25:51.024 [2024-07-26 21:29:25.741445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32557 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741454] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741480] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741495] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741509] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741526] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741540] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:51.024 [2024-07-26 21:29:25.741548] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:51.024 [2024-07-26 21:29:25.741555] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741564] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741589] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741602] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741612] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741640] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741652] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741661] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741689] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741701] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741710] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741741] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741753] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741762] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741788] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741800] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741809] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741835] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741849] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741858] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741884] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741896] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741904] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741926] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741938] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741947] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.741969] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.741974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.741981] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741989] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.741997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.742019] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.742024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 21:29:25.742031] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.742039] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.742047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.024 [2024-07-26 21:29:25.742065] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.024 [2024-07-26 21:29:25.742071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:51.024 [2024-07-26 
21:29:25.742077] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:51.024 [2024-07-26 21:29:25.742086] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742107] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742119] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742128] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742153] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742165] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742174] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742197] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742209] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742218] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742245] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742257] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742266] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742295] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742307] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742316] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742341] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742353] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742362] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742394] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742406] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742415] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742438] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742450] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742459] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742483] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742494] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742503] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742536] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742548] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742557] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742586] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742598] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742607] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742636] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742648] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742656] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742681] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:51.025 [2024-07-26 21:29:25.742693] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742702] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.025 [2024-07-26 21:29:25.742710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.025 [2024-07-26 21:29:25.742725] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.025 [2024-07-26 21:29:25.742731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 
21:29:25.742737] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742746] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.742770] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.742775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.742781] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742790] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.742820] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.742825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.742832] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742840] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.742864] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.742869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.742876] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742884] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.742909] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.742914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.742921] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742929] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.742956] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.742962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.742968] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742977] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.742985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.742999] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743011] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743019] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743050] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743062] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743071] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743100] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743112] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743121] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743145] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743156] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743165] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743191] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743202] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743213] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743240] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743252] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743261] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743286] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743298] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743307] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743332] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743344] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743353] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743376] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 
21:29:25.743388] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743397] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743422] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743434] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743443] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743472] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743484] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743494] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.026 [2024-07-26 21:29:25.743520] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.026 [2024-07-26 21:29:25.743525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:51.026 [2024-07-26 21:29:25.743532] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:51.026 [2024-07-26 21:29:25.743541] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.027 [2024-07-26 21:29:25.743548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.027 [2024-07-26 21:29:25.743564] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.027 [2024-07-26 21:29:25.743570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:51.027 [2024-07-26 21:29:25.743576] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:51.027 [2024-07-26 21:29:25.743585] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.027 [2024-07-26 21:29:25.743592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.027 [2024-07-26 21:29:25.743610] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.027 [2024-07-26 21:29:25.743616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:51.027 [2024-07-26 21:29:25.743622] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:51.027 [2024-07-26 21:29:25.747639] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:51.027 [2024-07-26 21:29:25.747647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:51.027 [2024-07-26 21:29:25.747671] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:51.027 [2024-07-26 21:29:25.747677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0007 p:0 m:0 dnr:0 00:25:51.027 [2024-07-26 21:29:25.747683] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:51.027 [2024-07-26 21:29:25.747690] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:51.027 d: 0 Kelvin (-273 Celsius) 00:25:51.027 Available Spare: 0% 00:25:51.027 Available Spare Threshold: 0% 00:25:51.027 Life Percentage Used: 0% 00:25:51.027 Data Units Read: 0 00:25:51.027 Data Units Written: 0 00:25:51.027 Host Read Commands: 0 00:25:51.027 Host Write Commands: 0 00:25:51.027 Controller Busy Time: 0 minutes 00:25:51.027 Power Cycles: 0 00:25:51.027 Power On Hours: 0 hours 00:25:51.027 Unsafe Shutdowns: 0 00:25:51.027 Unrecoverable Media Errors: 0 00:25:51.027 Lifetime Error Log Entries: 0 00:25:51.027 Warning Temperature Time: 0 minutes 00:25:51.027 Critical Temperature Time: 0 minutes 00:25:51.027 00:25:51.027 Number of Queues 00:25:51.027 ================ 00:25:51.027 Number of I/O Submission Queues: 127 00:25:51.027 Number of I/O Completion Queues: 127 00:25:51.027 00:25:51.027 Active Namespaces 00:25:51.027 ================= 00:25:51.027 Namespace ID:1 00:25:51.027 Error Recovery Timeout: Unlimited 00:25:51.027 Command Set Identifier: NVM (00h) 00:25:51.027 Deallocate: Supported 00:25:51.027 Deallocated/Unwritten Error: Not Supported 00:25:51.027 Deallocated Read Value: Unknown 00:25:51.027 Deallocate in Write Zeroes: Not Supported 00:25:51.027 Deallocated Guard Field: 0xFFFF 00:25:51.027 Flush: Supported 00:25:51.027 Reservation: Supported 00:25:51.027 Namespace Sharing Capabilities: Multiple Controllers 00:25:51.027 Size (in LBAs): 131072 (0GiB) 00:25:51.027 Capacity (in LBAs): 131072 (0GiB) 00:25:51.027 Utilization (in LBAs): 131072 (0GiB) 00:25:51.027 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:51.027 EUI64: ABCDEF0123456789 00:25:51.027 UUID: 3dcd79ef-90f2-4de4-a329-edd1581c87ef 00:25:51.027 Thin Provisioning: Not Supported 00:25:51.027 Per-NS Atomic Units: Yes 00:25:51.027 Atomic Boundary Size (Normal): 0 00:25:51.027 Atomic Boundary Size (PFail): 0 00:25:51.027 Atomic Boundary Offset: 0 00:25:51.027 Maximum Single Source Range Length: 65535 00:25:51.027 Maximum Copy Length: 65535 00:25:51.027 Maximum Source Range Count: 1 00:25:51.027 NGUID/EUI64 Never Reused: No 00:25:51.027 Namespace Write Protected: No 00:25:51.027 Number of LBA Formats: 1 00:25:51.027 Current LBA Format: LBA Format #00 00:25:51.027 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:51.027 
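(Annotation, not part of the captured run: the controller and namespace report above was produced by the host/identify.sh test over NVMe/RDMA. Roughly the same underlying identify data can be pulled from any RDMA-capable Linux host with stock nvme-cli; the sketch below is illustrative only and assumes the kernel nvme-rdma module is available and that the connected controller enumerates as /dev/nvme0 with namespace /dev/nvme0n1 — actual device names may differ.)

    # discover and connect to the subsystem exported at 192.168.100.8:4420
    modprobe nvme-rdma
    nvme discover -t rdma -a 192.168.100.8 -s 4420
    nvme connect  -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # dump controller and namespace identify data (same source data as the report above)
    nvme id-ctrl /dev/nvme0
    nvme id-ns   /dev/nvme0n1
    # tear down when done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1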
00:25:51.027 21:29:25 -- host/identify.sh@51 -- # sync 00:25:51.027 21:29:25 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:51.027 21:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.027 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:25:51.027 21:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.027 21:29:25 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:51.027 21:29:25 -- host/identify.sh@56 -- # nvmftestfini 00:25:51.027 21:29:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:51.027 21:29:25 -- nvmf/common.sh@116 -- # sync 00:25:51.027 21:29:25 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:51.027 21:29:25 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:51.027 21:29:25 -- nvmf/common.sh@119 -- # set +e 00:25:51.027 21:29:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:51.027 21:29:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:51.027 rmmod nvme_rdma 00:25:51.027 rmmod nvme_fabrics 00:25:51.027 21:29:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:51.027 21:29:25 -- nvmf/common.sh@123 -- # set -e 00:25:51.027 21:29:25 -- nvmf/common.sh@124 -- # return 0 00:25:51.027 21:29:25 -- nvmf/common.sh@477 -- # '[' -n 1797614 ']' 00:25:51.027 21:29:25 -- nvmf/common.sh@478 -- # killprocess 1797614 00:25:51.027 21:29:25 -- common/autotest_common.sh@926 -- # '[' -z 1797614 ']' 00:25:51.027 21:29:25 -- common/autotest_common.sh@930 -- # kill -0 1797614 00:25:51.027 21:29:25 -- common/autotest_common.sh@931 -- # uname 00:25:51.027 21:29:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:51.027 21:29:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1797614 00:25:51.287 21:29:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:51.287 21:29:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:51.287 21:29:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1797614' 00:25:51.287 killing process with pid 1797614 00:25:51.287 21:29:25 -- common/autotest_common.sh@945 -- # kill 1797614 00:25:51.287 [2024-07-26 21:29:25.903563] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:51.287 21:29:25 -- common/autotest_common.sh@950 -- # wait 1797614 00:25:51.547 21:29:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:51.547 21:29:26 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:51.547 00:25:51.547 real 0m10.033s 00:25:51.547 user 0m8.771s 00:25:51.547 sys 0m6.588s 00:25:51.547 21:29:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.547 21:29:26 -- common/autotest_common.sh@10 -- # set +x 00:25:51.547 ************************************ 00:25:51.547 END TEST nvmf_identify 00:25:51.547 ************************************ 00:25:51.547 21:29:26 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:51.547 21:29:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:51.547 21:29:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:51.547 21:29:26 -- common/autotest_common.sh@10 -- # set +x 00:25:51.547 ************************************ 00:25:51.547 START TEST nvmf_perf 00:25:51.547 ************************************ 00:25:51.547 21:29:26 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:51.547 * Looking for test storage... 00:25:51.547 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:51.547 21:29:26 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.547 21:29:26 -- nvmf/common.sh@7 -- # uname -s 00:25:51.547 21:29:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.547 21:29:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.547 21:29:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.547 21:29:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.547 21:29:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.547 21:29:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.547 21:29:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.547 21:29:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.547 21:29:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.547 21:29:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.547 21:29:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:51.547 21:29:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:51.547 21:29:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.547 21:29:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.547 21:29:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.547 21:29:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:51.547 21:29:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.547 21:29:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.547 21:29:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.547 21:29:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.547 21:29:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.547 21:29:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.547 21:29:26 -- paths/export.sh@5 -- # export PATH 00:25:51.547 21:29:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.547 21:29:26 -- nvmf/common.sh@46 -- # : 0 00:25:51.547 21:29:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:51.547 21:29:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:51.547 21:29:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:51.547 21:29:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.547 21:29:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.547 21:29:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:51.547 21:29:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:51.547 21:29:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:51.547 21:29:26 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:51.547 21:29:26 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:51.547 21:29:26 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:51.547 21:29:26 -- host/perf.sh@17 -- # nvmftestinit 00:25:51.547 21:29:26 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:51.547 21:29:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.547 21:29:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:51.547 21:29:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:51.547 21:29:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:51.547 21:29:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.548 21:29:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.548 21:29:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.548 21:29:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:51.548 21:29:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:51.548 21:29:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:51.548 21:29:26 -- common/autotest_common.sh@10 -- # set +x 00:25:59.670 21:29:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:59.670 21:29:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:59.670 21:29:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:59.670 21:29:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:59.670 21:29:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:59.670 21:29:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:59.670 21:29:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:59.670 21:29:34 -- nvmf/common.sh@294 -- # net_devs=() 
00:25:59.670 21:29:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:59.670 21:29:34 -- nvmf/common.sh@295 -- # e810=() 00:25:59.670 21:29:34 -- nvmf/common.sh@295 -- # local -ga e810 00:25:59.670 21:29:34 -- nvmf/common.sh@296 -- # x722=() 00:25:59.670 21:29:34 -- nvmf/common.sh@296 -- # local -ga x722 00:25:59.670 21:29:34 -- nvmf/common.sh@297 -- # mlx=() 00:25:59.670 21:29:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:59.670 21:29:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.670 21:29:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:59.670 21:29:34 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:59.670 21:29:34 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:59.670 21:29:34 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:59.670 21:29:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:59.670 21:29:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:59.670 21:29:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:59.670 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:59.670 21:29:34 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:59.670 21:29:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:59.670 21:29:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:59.670 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:59.670 21:29:34 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:59.670 21:29:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:59.670 21:29:34 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:59.670 21:29:34 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.670 21:29:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:59.670 21:29:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.670 21:29:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:59.670 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:59.670 21:29:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.670 21:29:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:59.670 21:29:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.670 21:29:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:59.670 21:29:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.670 21:29:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:59.670 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:59.670 21:29:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.670 21:29:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:59.670 21:29:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:59.670 21:29:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:59.670 21:29:34 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:59.670 21:29:34 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:59.670 21:29:34 -- nvmf/common.sh@57 -- # uname 00:25:59.670 21:29:34 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:59.670 21:29:34 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:59.670 21:29:34 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:59.670 21:29:34 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:59.670 21:29:34 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:59.670 21:29:34 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:59.670 21:29:34 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:59.670 21:29:34 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:59.670 21:29:34 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:59.670 21:29:34 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:59.670 21:29:34 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:59.670 21:29:34 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:59.670 21:29:34 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:59.670 21:29:34 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:59.670 21:29:34 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:59.670 21:29:34 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:59.670 21:29:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:59.670 21:29:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:59.671 21:29:34 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:59.671 21:29:34 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:59.671 21:29:34 -- nvmf/common.sh@104 -- # continue 2 00:25:59.671 21:29:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:59.671 21:29:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:59.671 21:29:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:59.671 21:29:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:59.671 21:29:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:59.671 21:29:34 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:59.671 21:29:34 -- 
nvmf/common.sh@104 -- # continue 2 00:25:59.671 21:29:34 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:59.671 21:29:34 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:59.671 21:29:34 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:59.671 21:29:34 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:59.671 21:29:34 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:59.671 21:29:34 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:59.671 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:59.671 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:59.671 altname enp217s0f0np0 00:25:59.671 altname ens818f0np0 00:25:59.671 inet 192.168.100.8/24 scope global mlx_0_0 00:25:59.671 valid_lft forever preferred_lft forever 00:25:59.671 21:29:34 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:59.671 21:29:34 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:59.671 21:29:34 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:59.671 21:29:34 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:59.671 21:29:34 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:59.671 21:29:34 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:59.671 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:59.671 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:59.671 altname enp217s0f1np1 00:25:59.671 altname ens818f1np1 00:25:59.671 inet 192.168.100.9/24 scope global mlx_0_1 00:25:59.671 valid_lft forever preferred_lft forever 00:25:59.671 21:29:34 -- nvmf/common.sh@410 -- # return 0 00:25:59.671 21:29:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:59.671 21:29:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:59.671 21:29:34 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:59.671 21:29:34 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:59.671 21:29:34 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:59.671 21:29:34 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:59.671 21:29:34 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:59.671 21:29:34 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:59.671 21:29:34 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:59.671 21:29:34 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:59.671 21:29:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:59.671 21:29:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:59.671 21:29:34 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:59.671 21:29:34 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:59.671 21:29:34 -- nvmf/common.sh@104 -- # continue 2 00:25:59.671 21:29:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:59.671 21:29:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:59.671 21:29:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:59.671 21:29:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:59.671 21:29:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:25:59.671 21:29:34 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:59.671 21:29:34 -- nvmf/common.sh@104 -- # continue 2 00:25:59.671 21:29:34 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:59.671 21:29:34 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:59.671 21:29:34 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:59.671 21:29:34 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:59.671 21:29:34 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:59.671 21:29:34 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:59.671 21:29:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:59.671 21:29:34 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:59.671 192.168.100.9' 00:25:59.671 21:29:34 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:59.671 192.168.100.9' 00:25:59.671 21:29:34 -- nvmf/common.sh@445 -- # head -n 1 00:25:59.671 21:29:34 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:59.671 21:29:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:59.671 192.168.100.9' 00:25:59.671 21:29:34 -- nvmf/common.sh@446 -- # tail -n +2 00:25:59.671 21:29:34 -- nvmf/common.sh@446 -- # head -n 1 00:25:59.671 21:29:34 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:59.671 21:29:34 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:59.671 21:29:34 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:59.671 21:29:34 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:59.671 21:29:34 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:59.671 21:29:34 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:59.671 21:29:34 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:59.671 21:29:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:59.671 21:29:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:59.671 21:29:34 -- common/autotest_common.sh@10 -- # set +x 00:25:59.671 21:29:34 -- nvmf/common.sh@469 -- # nvmfpid=1801835 00:25:59.671 21:29:34 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:59.671 21:29:34 -- nvmf/common.sh@470 -- # waitforlisten 1801835 00:25:59.671 21:29:34 -- common/autotest_common.sh@819 -- # '[' -z 1801835 ']' 00:25:59.671 21:29:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.671 21:29:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:59.671 21:29:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.671 21:29:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:59.671 21:29:34 -- common/autotest_common.sh@10 -- # set +x 00:25:59.671 [2024-07-26 21:29:34.430766] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:25:59.671 [2024-07-26 21:29:34.430831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.671 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.671 [2024-07-26 21:29:34.515877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.930 [2024-07-26 21:29:34.553340] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:59.930 [2024-07-26 21:29:34.553450] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.930 [2024-07-26 21:29:34.553459] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.930 [2024-07-26 21:29:34.553467] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.930 [2024-07-26 21:29:34.553521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.930 [2024-07-26 21:29:34.553618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.930 [2024-07-26 21:29:34.553683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:59.930 [2024-07-26 21:29:34.553685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.498 21:29:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:00.498 21:29:35 -- common/autotest_common.sh@852 -- # return 0 00:26:00.498 21:29:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:00.498 21:29:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:00.498 21:29:35 -- common/autotest_common.sh@10 -- # set +x 00:26:00.498 21:29:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.498 21:29:35 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:00.498 21:29:35 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:03.789 21:29:38 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:03.789 21:29:38 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:03.789 21:29:38 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:26:03.789 21:29:38 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:04.086 21:29:38 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:04.086 21:29:38 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:26:04.086 21:29:38 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:04.086 21:29:38 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:26:04.086 21:29:38 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:26:04.086 [2024-07-26 21:29:38.845771] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:26:04.086 [2024-07-26 21:29:38.867782] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14f1940/0x14ff700) succeed. 00:26:04.086 [2024-07-26 21:29:38.878308] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14f2f30/0x159f800) succeed. 
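(Annotation, not part of the captured run: at this point the RDMA transport is registered on both mlx5 devices. The xtrace lines above and below perform the target-side bring-up for the perf test; it is condensed here into a plain shell sketch for readability — the rpc.py path is abbreviated, and it assumes the nvmf_tgt application started above is listening on its default RPC socket. The transport and Malloc bdev steps appear just above; the subsystem, namespace, and listener steps follow below.)

    # target-side bring-up performed by perf.sh (condensed from the surrounding trace)
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512                                    # -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # local NVMe at 0000:d8:00.0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420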
00:26:04.368 21:29:38 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.368 21:29:39 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:04.368 21:29:39 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:04.628 21:29:39 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:04.628 21:29:39 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:04.887 21:29:39 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:04.887 [2024-07-26 21:29:39.663185] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:04.887 21:29:39 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:05.147 21:29:39 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:26:05.147 21:29:39 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:26:05.147 21:29:39 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:05.147 21:29:39 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:26:06.526 Initializing NVMe Controllers 00:26:06.526 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:26:06.526 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:26:06.526 Initialization complete. Launching workers. 00:26:06.526 ======================================================== 00:26:06.526 Latency(us) 00:26:06.526 Device Information : IOPS MiB/s Average min max 00:26:06.526 PCIE (0000:d8:00.0) NSID 1 from core 0: 102593.15 400.75 311.46 9.99 6191.21 00:26:06.526 ======================================================== 00:26:06.526 Total : 102593.15 400.75 311.46 9.99 6191.21 00:26:06.526 00:26:06.526 21:29:41 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:06.526 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.812 Initializing NVMe Controllers 00:26:09.812 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:09.812 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:09.812 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:09.812 Initialization complete. Launching workers. 
00:26:09.812 ======================================================== 00:26:09.812 Latency(us) 00:26:09.812 Device Information : IOPS MiB/s Average min max 00:26:09.812 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6855.57 26.78 144.50 47.65 5039.82 00:26:09.812 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5253.64 20.52 190.14 67.43 5056.27 00:26:09.812 ======================================================== 00:26:09.812 Total : 12109.21 47.30 164.30 47.65 5056.27 00:26:09.812 00:26:09.812 21:29:44 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:09.812 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.103 Initializing NVMe Controllers 00:26:13.103 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:13.103 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:13.103 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:13.103 Initialization complete. Launching workers. 00:26:13.103 ======================================================== 00:26:13.103 Latency(us) 00:26:13.103 Device Information : IOPS MiB/s Average min max 00:26:13.103 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19233.50 75.13 1664.16 450.11 8072.71 00:26:13.103 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3871.90 15.12 8308.88 7753.91 16156.76 00:26:13.103 ======================================================== 00:26:13.103 Total : 23105.40 90.26 2777.65 450.11 16156.76 00:26:13.103 00:26:13.103 21:29:47 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:26:13.103 21:29:47 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:13.362 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.554 Initializing NVMe Controllers 00:26:17.554 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:17.554 Controller IO queue size 128, less than required. 00:26:17.554 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.554 Controller IO queue size 128, less than required. 00:26:17.554 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.554 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:17.554 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:17.554 Initialization complete. Launching workers. 
00:26:17.554 ======================================================== 00:26:17.554 Latency(us) 00:26:17.554 Device Information : IOPS MiB/s Average min max 00:26:17.554 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4100.00 1025.00 31378.90 11760.78 68302.18 00:26:17.554 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4144.50 1036.12 30695.83 15045.97 50416.54 00:26:17.554 ======================================================== 00:26:17.554 Total : 8244.50 2061.12 31035.52 11760.78 68302.18 00:26:17.554 00:26:17.554 21:29:52 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:26:17.554 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.813 No valid NVMe controllers or AIO or URING devices found 00:26:17.813 Initializing NVMe Controllers 00:26:17.813 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:17.813 Controller IO queue size 128, less than required. 00:26:17.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.813 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:17.813 Controller IO queue size 128, less than required. 00:26:17.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.813 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:26:17.813 WARNING: Some requested NVMe devices were skipped 00:26:17.813 21:29:52 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:26:18.071 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.263 Initializing NVMe Controllers 00:26:22.263 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.263 Controller IO queue size 128, less than required. 00:26:22.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:22.263 Controller IO queue size 128, less than required. 00:26:22.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:22.263 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:22.263 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:22.263 Initialization complete. Launching workers. 
00:26:22.263 00:26:22.263 ==================== 00:26:22.263 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:22.263 RDMA transport: 00:26:22.263 dev name: mlx5_0 00:26:22.263 polls: 420776 00:26:22.263 idle_polls: 417002 00:26:22.263 completions: 45901 00:26:22.263 queued_requests: 1 00:26:22.263 total_send_wrs: 23014 00:26:22.263 send_doorbell_updates: 3577 00:26:22.263 total_recv_wrs: 23014 00:26:22.263 recv_doorbell_updates: 3577 00:26:22.263 --------------------------------- 00:26:22.263 00:26:22.263 ==================== 00:26:22.263 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:22.263 RDMA transport: 00:26:22.263 dev name: mlx5_0 00:26:22.263 polls: 423032 00:26:22.263 idle_polls: 422752 00:26:22.263 completions: 20299 00:26:22.263 queued_requests: 1 00:26:22.263 total_send_wrs: 10213 00:26:22.263 send_doorbell_updates: 256 00:26:22.263 total_recv_wrs: 10213 00:26:22.263 recv_doorbell_updates: 256 00:26:22.263 --------------------------------- 00:26:22.263 ======================================================== 00:26:22.263 Latency(us) 00:26:22.263 Device Information : IOPS MiB/s Average min max 00:26:22.263 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5785.50 1446.38 22196.57 11340.73 57090.91 00:26:22.263 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2585.00 646.25 49500.97 30335.18 72009.21 00:26:22.263 ======================================================== 00:26:22.263 Total : 8370.50 2092.62 30628.79 11340.73 72009.21 00:26:22.263 00:26:22.263 21:29:57 -- host/perf.sh@66 -- # sync 00:26:22.263 21:29:57 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.521 21:29:57 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:22.521 21:29:57 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:26:22.521 21:29:57 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:29.089 21:30:03 -- host/perf.sh@72 -- # ls_guid=e527dfc4-4cb4-4b5b-9c6f-95f1ab5d8747 00:26:29.089 21:30:03 -- host/perf.sh@73 -- # get_lvs_free_mb e527dfc4-4cb4-4b5b-9c6f-95f1ab5d8747 00:26:29.089 21:30:03 -- common/autotest_common.sh@1343 -- # local lvs_uuid=e527dfc4-4cb4-4b5b-9c6f-95f1ab5d8747 00:26:29.089 21:30:03 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:29.089 21:30:03 -- common/autotest_common.sh@1345 -- # local fc 00:26:29.089 21:30:03 -- common/autotest_common.sh@1346 -- # local cs 00:26:29.089 21:30:03 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:29.089 21:30:03 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:29.089 { 00:26:29.089 "uuid": "e527dfc4-4cb4-4b5b-9c6f-95f1ab5d8747", 00:26:29.089 "name": "lvs_0", 00:26:29.089 "base_bdev": "Nvme0n1", 00:26:29.089 "total_data_clusters": 476466, 00:26:29.089 "free_clusters": 476466, 00:26:29.089 "block_size": 512, 00:26:29.089 "cluster_size": 4194304 00:26:29.089 } 00:26:29.089 ]' 00:26:29.089 21:30:03 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="e527dfc4-4cb4-4b5b-9c6f-95f1ab5d8747") .free_clusters' 00:26:29.089 21:30:03 -- common/autotest_common.sh@1348 -- # fc=476466 00:26:29.089 21:30:03 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="e527dfc4-4cb4-4b5b-9c6f-95f1ab5d8747") .cluster_size' 00:26:29.089 
21:30:03 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:29.089 21:30:03 -- common/autotest_common.sh@1352 -- # free_mb=1905864 00:26:29.089 21:30:03 -- common/autotest_common.sh@1353 -- # echo 1905864 00:26:29.089 1905864 00:26:29.089 21:30:03 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:26:29.089 21:30:03 -- host/perf.sh@78 -- # free_mb=20480 00:26:29.089 21:30:03 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e527dfc4-4cb4-4b5b-9c6f-95f1ab5d8747 lbd_0 20480 00:26:29.349 21:30:03 -- host/perf.sh@80 -- # lb_guid=186e7f41-68ae-4fe5-887a-2d099787df34 00:26:29.349 21:30:03 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 186e7f41-68ae-4fe5-887a-2d099787df34 lvs_n_0 00:26:31.252 21:30:05 -- host/perf.sh@83 -- # ls_nested_guid=cdb70bb9-38d5-45aa-b9b7-f61e87a1a598 00:26:31.252 21:30:05 -- host/perf.sh@84 -- # get_lvs_free_mb cdb70bb9-38d5-45aa-b9b7-f61e87a1a598 00:26:31.252 21:30:05 -- common/autotest_common.sh@1343 -- # local lvs_uuid=cdb70bb9-38d5-45aa-b9b7-f61e87a1a598 00:26:31.252 21:30:05 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:31.252 21:30:05 -- common/autotest_common.sh@1345 -- # local fc 00:26:31.252 21:30:05 -- common/autotest_common.sh@1346 -- # local cs 00:26:31.252 21:30:05 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:31.513 21:30:06 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:31.513 { 00:26:31.513 "uuid": "e527dfc4-4cb4-4b5b-9c6f-95f1ab5d8747", 00:26:31.513 "name": "lvs_0", 00:26:31.513 "base_bdev": "Nvme0n1", 00:26:31.513 "total_data_clusters": 476466, 00:26:31.513 "free_clusters": 471346, 00:26:31.513 "block_size": 512, 00:26:31.513 "cluster_size": 4194304 00:26:31.513 }, 00:26:31.513 { 00:26:31.513 "uuid": "cdb70bb9-38d5-45aa-b9b7-f61e87a1a598", 00:26:31.513 "name": "lvs_n_0", 00:26:31.513 "base_bdev": "186e7f41-68ae-4fe5-887a-2d099787df34", 00:26:31.513 "total_data_clusters": 5114, 00:26:31.513 "free_clusters": 5114, 00:26:31.513 "block_size": 512, 00:26:31.513 "cluster_size": 4194304 00:26:31.513 } 00:26:31.513 ]' 00:26:31.513 21:30:06 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="cdb70bb9-38d5-45aa-b9b7-f61e87a1a598") .free_clusters' 00:26:31.513 21:30:06 -- common/autotest_common.sh@1348 -- # fc=5114 00:26:31.513 21:30:06 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="cdb70bb9-38d5-45aa-b9b7-f61e87a1a598") .cluster_size' 00:26:31.513 21:30:06 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:31.513 21:30:06 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:26:31.513 21:30:06 -- common/autotest_common.sh@1353 -- # echo 20456 00:26:31.513 20456 00:26:31.513 21:30:06 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:31.513 21:30:06 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cdb70bb9-38d5-45aa-b9b7-f61e87a1a598 lbd_nest_0 20456 00:26:31.837 21:30:06 -- host/perf.sh@88 -- # lb_nested_guid=239218a5-0835-4c73-85b5-3a94890b7b95 00:26:31.837 21:30:06 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:31.837 21:30:06 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:31.837 21:30:06 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 239218a5-0835-4c73-85b5-3a94890b7b95 00:26:32.096 21:30:06 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:32.096 21:30:06 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:32.096 21:30:06 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:32.096 21:30:06 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:32.096 21:30:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:32.096 21:30:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:32.096 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.309 Initializing NVMe Controllers 00:26:44.309 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.309 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:44.309 Initialization complete. Launching workers. 00:26:44.309 ======================================================== 00:26:44.309 Latency(us) 00:26:44.309 Device Information : IOPS MiB/s Average min max 00:26:44.309 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5935.00 2.90 168.08 67.64 5043.90 00:26:44.309 ======================================================== 00:26:44.309 Total : 5935.00 2.90 168.08 67.64 5043.90 00:26:44.309 00:26:44.309 21:30:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:44.309 21:30:18 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:44.309 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.531 Initializing NVMe Controllers 00:26:56.531 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.531 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:56.531 Initialization complete. Launching workers. 00:26:56.531 ======================================================== 00:26:56.531 Latency(us) 00:26:56.531 Device Information : IOPS MiB/s Average min max 00:26:56.531 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2674.55 334.32 373.25 155.26 7112.74 00:26:56.531 ======================================================== 00:26:56.531 Total : 2674.55 334.32 373.25 155.26 7112.74 00:26:56.531 00:26:56.531 21:30:29 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:56.531 21:30:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:56.531 21:30:29 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:56.531 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.507 Initializing NVMe Controllers 00:27:06.507 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:06.507 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:06.507 Initialization complete. Launching workers. 
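(Sketch, not part of the captured output.) The get_lvs_free_mb figures traced a few entries above follow directly from the bdev_lvol_get_lvstores output: free_mb = free_clusters * cluster_size / 1 MiB.
  # lvs_0   : 476466 clusters * 4194304 B per cluster = 1905864 MiB, capped to the 20480 MiB used for lbd_0
  # lvs_n_0 :   5114 clusters * 4194304 B per cluster =   20456 MiB, used as-is for lbd_nest_0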
00:27:06.507 ======================================================== 00:27:06.507 Latency(us) 00:27:06.507 Device Information : IOPS MiB/s Average min max 00:27:06.507 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11973.70 5.85 2672.03 926.26 9089.01 00:27:06.507 ======================================================== 00:27:06.507 Total : 11973.70 5.85 2672.03 926.26 9089.01 00:27:06.507 00:27:06.507 21:30:41 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:06.507 21:30:41 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:06.507 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.717 Initializing NVMe Controllers 00:27:18.717 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.717 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:18.717 Initialization complete. Launching workers. 00:27:18.717 ======================================================== 00:27:18.717 Latency(us) 00:27:18.717 Device Information : IOPS MiB/s Average min max 00:27:18.717 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4001.07 500.13 7995.94 5898.05 16005.29 00:27:18.717 ======================================================== 00:27:18.717 Total : 4001.07 500.13 7995.94 5898.05 16005.29 00:27:18.717 00:27:18.717 21:30:52 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:18.717 21:30:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:18.717 21:30:52 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:18.717 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.959 Initializing NVMe Controllers 00:27:30.959 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:30.959 Controller IO queue size 128, less than required. 00:27:30.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.959 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:30.959 Initialization complete. Launching workers. 00:27:30.959 ======================================================== 00:27:30.959 Latency(us) 00:27:30.959 Device Information : IOPS MiB/s Average min max 00:27:30.959 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19537.04 9.54 6551.43 1751.08 16146.68 00:27:30.959 ======================================================== 00:27:30.959 Total : 19537.04 9.54 6551.43 1751.08 16146.68 00:27:30.959 00:27:30.959 21:31:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:30.959 21:31:03 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:30.959 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.937 Initializing NVMe Controllers 00:27:40.937 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:40.937 Controller IO queue size 128, less than required. 00:27:40.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
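(Sketch, not part of the captured output.) The qd_depth/io_size sweep driving these runs expands into one spdk_nvme_perf invocation per combination; condensed from the trace, with the long workspace path shortened:
  for qd in 1 32 128; do
    for o in 512 131072; do
      spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
    done
  done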
00:27:40.937 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:40.937 Initialization complete. Launching workers. 00:27:40.937 ======================================================== 00:27:40.937 Latency(us) 00:27:40.937 Device Information : IOPS MiB/s Average min max 00:27:40.937 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11219.30 1402.41 11404.45 3379.38 23852.67 00:27:40.937 ======================================================== 00:27:40.937 Total : 11219.30 1402.41 11404.45 3379.38 23852.67 00:27:40.937 00:27:40.937 21:31:15 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:40.937 21:31:15 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 239218a5-0835-4c73-85b5-3a94890b7b95 00:27:41.196 21:31:15 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:41.196 21:31:16 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 186e7f41-68ae-4fe5-887a-2d099787df34 00:27:41.454 21:31:16 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:41.713 21:31:16 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:41.713 21:31:16 -- host/perf.sh@114 -- # nvmftestfini 00:27:41.713 21:31:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:41.713 21:31:16 -- nvmf/common.sh@116 -- # sync 00:27:41.713 21:31:16 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:41.713 21:31:16 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:41.713 21:31:16 -- nvmf/common.sh@119 -- # set +e 00:27:41.713 21:31:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:41.713 21:31:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:41.713 rmmod nvme_rdma 00:27:41.713 rmmod nvme_fabrics 00:27:41.713 21:31:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:41.713 21:31:16 -- nvmf/common.sh@123 -- # set -e 00:27:41.713 21:31:16 -- nvmf/common.sh@124 -- # return 0 00:27:41.713 21:31:16 -- nvmf/common.sh@477 -- # '[' -n 1801835 ']' 00:27:41.713 21:31:16 -- nvmf/common.sh@478 -- # killprocess 1801835 00:27:41.713 21:31:16 -- common/autotest_common.sh@926 -- # '[' -z 1801835 ']' 00:27:41.713 21:31:16 -- common/autotest_common.sh@930 -- # kill -0 1801835 00:27:41.713 21:31:16 -- common/autotest_common.sh@931 -- # uname 00:27:41.713 21:31:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:41.713 21:31:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1801835 00:27:41.713 21:31:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:41.713 21:31:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:41.713 21:31:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1801835' 00:27:41.713 killing process with pid 1801835 00:27:41.713 21:31:16 -- common/autotest_common.sh@945 -- # kill 1801835 00:27:41.713 21:31:16 -- common/autotest_common.sh@950 -- # wait 1801835 00:27:44.249 21:31:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:44.249 21:31:18 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:44.249 00:27:44.249 real 1m52.742s 00:27:44.249 user 7m1.125s 00:27:44.249 sys 0m8.224s 00:27:44.249 21:31:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.249 21:31:18 -- 
common/autotest_common.sh@10 -- # set +x 00:27:44.249 ************************************ 00:27:44.249 END TEST nvmf_perf 00:27:44.249 ************************************ 00:27:44.249 21:31:19 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:44.249 21:31:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:44.249 21:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.249 21:31:19 -- common/autotest_common.sh@10 -- # set +x 00:27:44.249 ************************************ 00:27:44.249 START TEST nvmf_fio_host 00:27:44.249 ************************************ 00:27:44.249 21:31:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:44.249 * Looking for test storage... 00:27:44.249 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:44.249 21:31:19 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:44.249 21:31:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.249 21:31:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.249 21:31:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.249 21:31:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.249 21:31:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.249 21:31:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.249 21:31:19 -- paths/export.sh@5 -- # export PATH 00:27:44.249 21:31:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.249 21:31:19 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.249 21:31:19 -- nvmf/common.sh@7 -- # uname -s 00:27:44.509 21:31:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.509 21:31:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.509 21:31:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.509 21:31:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.509 21:31:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.509 21:31:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.509 21:31:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.509 21:31:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.509 21:31:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.509 21:31:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.509 21:31:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:44.509 21:31:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:44.509 21:31:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.509 21:31:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.509 21:31:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.509 21:31:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:44.509 21:31:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.509 21:31:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.509 21:31:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.509 21:31:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.509 21:31:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.510 
21:31:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.510 21:31:19 -- paths/export.sh@5 -- # export PATH 00:27:44.510 21:31:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.510 21:31:19 -- nvmf/common.sh@46 -- # : 0 00:27:44.510 21:31:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:44.510 21:31:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:44.510 21:31:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:44.510 21:31:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.510 21:31:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.510 21:31:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:44.510 21:31:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:44.510 21:31:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:44.510 21:31:19 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:44.510 21:31:19 -- host/fio.sh@14 -- # nvmftestinit 00:27:44.510 21:31:19 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:44.510 21:31:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.510 21:31:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:44.510 21:31:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:44.510 21:31:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:44.510 21:31:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.510 21:31:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.510 21:31:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.510 21:31:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:44.510 21:31:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:44.510 21:31:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:44.510 21:31:19 -- common/autotest_common.sh@10 -- # set +x 00:27:52.630 21:31:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:52.630 21:31:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:52.630 21:31:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:52.630 21:31:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:52.630 21:31:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:52.630 21:31:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:52.630 21:31:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:52.630 21:31:27 -- 
nvmf/common.sh@294 -- # net_devs=() 00:27:52.630 21:31:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:52.630 21:31:27 -- nvmf/common.sh@295 -- # e810=() 00:27:52.630 21:31:27 -- nvmf/common.sh@295 -- # local -ga e810 00:27:52.630 21:31:27 -- nvmf/common.sh@296 -- # x722=() 00:27:52.630 21:31:27 -- nvmf/common.sh@296 -- # local -ga x722 00:27:52.630 21:31:27 -- nvmf/common.sh@297 -- # mlx=() 00:27:52.630 21:31:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:52.630 21:31:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.630 21:31:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:52.630 21:31:27 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:52.630 21:31:27 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:52.630 21:31:27 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:52.630 21:31:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:52.630 21:31:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:52.630 21:31:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:52.630 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:52.630 21:31:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:52.630 21:31:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:52.630 21:31:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:52.630 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:52.630 21:31:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:52.630 21:31:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:52.630 21:31:27 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:52.630 
21:31:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.630 21:31:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:52.630 21:31:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.630 21:31:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:52.630 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:52.630 21:31:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.630 21:31:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:52.630 21:31:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.630 21:31:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:52.630 21:31:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.630 21:31:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:52.630 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:52.630 21:31:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.630 21:31:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:52.630 21:31:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:52.630 21:31:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:52.630 21:31:27 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:52.630 21:31:27 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:52.630 21:31:27 -- nvmf/common.sh@57 -- # uname 00:27:52.630 21:31:27 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:52.630 21:31:27 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:52.630 21:31:27 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:52.630 21:31:27 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:52.630 21:31:27 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:52.630 21:31:27 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:52.630 21:31:27 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:52.630 21:31:27 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:52.630 21:31:27 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:52.630 21:31:27 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:52.630 21:31:27 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:52.630 21:31:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:52.630 21:31:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:52.630 21:31:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:52.631 21:31:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:52.631 21:31:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:52.631 21:31:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:52.631 21:31:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:52.631 21:31:27 -- nvmf/common.sh@104 -- # continue 2 00:27:52.631 21:31:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:52.631 21:31:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:52.631 21:31:27 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:27:52.631 21:31:27 -- nvmf/common.sh@104 -- # continue 2 00:27:52.631 21:31:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:52.631 21:31:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:52.631 21:31:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:52.631 21:31:27 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:52.631 21:31:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:52.631 21:31:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:52.631 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:52.631 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:52.631 altname enp217s0f0np0 00:27:52.631 altname ens818f0np0 00:27:52.631 inet 192.168.100.8/24 scope global mlx_0_0 00:27:52.631 valid_lft forever preferred_lft forever 00:27:52.631 21:31:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:52.631 21:31:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:52.631 21:31:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:52.631 21:31:27 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:52.631 21:31:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:52.631 21:31:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:52.631 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:52.631 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:52.631 altname enp217s0f1np1 00:27:52.631 altname ens818f1np1 00:27:52.631 inet 192.168.100.9/24 scope global mlx_0_1 00:27:52.631 valid_lft forever preferred_lft forever 00:27:52.631 21:31:27 -- nvmf/common.sh@410 -- # return 0 00:27:52.631 21:31:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:52.631 21:31:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:52.631 21:31:27 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:52.631 21:31:27 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:52.631 21:31:27 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:52.631 21:31:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:52.631 21:31:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:52.631 21:31:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:52.631 21:31:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:52.631 21:31:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:52.631 21:31:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:52.631 21:31:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:52.631 21:31:27 -- nvmf/common.sh@104 -- # continue 2 00:27:52.631 21:31:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:52.631 21:31:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:52.631 21:31:27 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:52.631 21:31:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:52.631 21:31:27 -- nvmf/common.sh@104 -- # continue 2 00:27:52.631 21:31:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:52.631 21:31:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:52.631 21:31:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:52.631 21:31:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:52.631 21:31:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:52.631 21:31:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:52.631 21:31:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:52.631 21:31:27 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:52.631 192.168.100.9' 00:27:52.631 21:31:27 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:52.631 192.168.100.9' 00:27:52.631 21:31:27 -- nvmf/common.sh@445 -- # head -n 1 00:27:52.631 21:31:27 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:52.631 21:31:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:52.631 192.168.100.9' 00:27:52.631 21:31:27 -- nvmf/common.sh@446 -- # tail -n +2 00:27:52.631 21:31:27 -- nvmf/common.sh@446 -- # head -n 1 00:27:52.631 21:31:27 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:52.631 21:31:27 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:52.631 21:31:27 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:52.631 21:31:27 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:52.631 21:31:27 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:52.631 21:31:27 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:52.631 21:31:27 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:52.631 21:31:27 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:52.631 21:31:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:52.631 21:31:27 -- common/autotest_common.sh@10 -- # set +x 00:27:52.631 21:31:27 -- host/fio.sh@24 -- # nvmfpid=1823894 00:27:52.631 21:31:27 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:52.631 21:31:27 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.631 21:31:27 -- host/fio.sh@28 -- # waitforlisten 1823894 00:27:52.631 21:31:27 -- common/autotest_common.sh@819 -- # '[' -z 1823894 ']' 00:27:52.631 21:31:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.631 21:31:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:52.631 21:31:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.631 21:31:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:52.631 21:31:27 -- common/autotest_common.sh@10 -- # set +x 00:27:52.631 [2024-07-26 21:31:27.321364] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
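(Sketch, not part of the captured output.) The NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP values above are read straight off the RDMA netdevs; the get_ip_address helper traced here is equivalent to:
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9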
00:27:52.631 [2024-07-26 21:31:27.321420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.631 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.631 [2024-07-26 21:31:27.406701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.631 [2024-07-26 21:31:27.445431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:52.631 [2024-07-26 21:31:27.445542] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.631 [2024-07-26 21:31:27.445552] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.631 [2024-07-26 21:31:27.445560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.631 [2024-07-26 21:31:27.445609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.631 [2024-07-26 21:31:27.445651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.631 [2024-07-26 21:31:27.445692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.631 [2024-07-26 21:31:27.445695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.570 21:31:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:53.570 21:31:28 -- common/autotest_common.sh@852 -- # return 0 00:27:53.570 21:31:28 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:53.570 [2024-07-26 21:31:28.279553] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8ea060/0x8ee550) succeed. 00:27:53.570 [2024-07-26 21:31:28.289915] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8eb650/0x92fbe0) succeed. 
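(Sketch, not part of the captured output.) Condensing the target bring-up RPCs traced just above and below, with paths shortened:
  nvmf_tgt -i 0 -e 0xFFFF -m 0xF &                                        # 4-core reactor mask, per the "Total cores available: 4" notice
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420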
00:27:53.570 21:31:28 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:53.570 21:31:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:53.570 21:31:28 -- common/autotest_common.sh@10 -- # set +x 00:27:53.829 21:31:28 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:53.829 Malloc1 00:27:53.829 21:31:28 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:54.088 21:31:28 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:54.348 21:31:29 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:54.348 [2024-07-26 21:31:29.170917] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:54.348 21:31:29 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:54.608 21:31:29 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:54.608 21:31:29 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:54.608 21:31:29 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:54.608 21:31:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:54.608 21:31:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:54.608 21:31:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:54.608 21:31:29 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:54.608 21:31:29 -- common/autotest_common.sh@1320 -- # shift 00:27:54.608 21:31:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:54.608 21:31:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.608 21:31:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:54.608 21:31:29 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:54.608 21:31:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:54.608 21:31:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:54.608 21:31:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:54.608 21:31:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.608 21:31:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:54.608 21:31:29 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:54.608 21:31:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:54.608 21:31:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:54.608 21:31:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:54.608 21:31:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:54.608 21:31:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:54.867 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:54.867 fio-3.35 00:27:54.867 Starting 1 thread 00:27:55.127 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.664 00:27:57.664 test: (groupid=0, jobs=1): err= 0: pid=1824433: Fri Jul 26 21:31:32 2024 00:27:57.664 read: IOPS=18.7k, BW=73.0MiB/s (76.6MB/s)(146MiB/2003msec) 00:27:57.664 slat (nsec): min=1342, max=31088, avg=1473.05, stdev=441.94 00:27:57.664 clat (usec): min=1783, max=6167, avg=3400.56, stdev=92.87 00:27:57.664 lat (usec): min=1797, max=6168, avg=3402.04, stdev=92.81 00:27:57.664 clat percentiles (usec): 00:27:57.664 | 1.00th=[ 3359], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3392], 00:27:57.664 | 30.00th=[ 3392], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3392], 00:27:57.664 | 70.00th=[ 3392], 80.00th=[ 3425], 90.00th=[ 3425], 95.00th=[ 3425], 00:27:57.664 | 99.00th=[ 3458], 99.50th=[ 3556], 99.90th=[ 4948], 99.95th=[ 5276], 00:27:57.664 | 99.99th=[ 6128] 00:27:57.664 bw ( KiB/s): min=73016, max=75400, per=99.99%, avg=74762.00, stdev=1165.23, samples=4 00:27:57.664 iops : min=18254, max=18850, avg=18690.50, stdev=291.31, samples=4 00:27:57.664 write: IOPS=18.7k, BW=73.0MiB/s (76.6MB/s)(146MiB/2003msec); 0 zone resets 00:27:57.664 slat (nsec): min=1394, max=17753, avg=1571.49, stdev=452.20 00:27:57.664 clat (usec): min=2535, max=6173, avg=3399.90, stdev=98.18 00:27:57.664 lat (usec): min=2546, max=6174, avg=3401.47, stdev=98.14 00:27:57.664 clat percentiles (usec): 00:27:57.664 | 1.00th=[ 3359], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3392], 00:27:57.664 | 30.00th=[ 3392], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3392], 00:27:57.664 | 70.00th=[ 3392], 80.00th=[ 3425], 90.00th=[ 3425], 95.00th=[ 3425], 00:27:57.664 | 99.00th=[ 3458], 99.50th=[ 3654], 99.90th=[ 4948], 99.95th=[ 5276], 00:27:57.664 | 99.99th=[ 6128] 00:27:57.664 bw ( KiB/s): min=73128, max=75424, per=99.97%, avg=74758.00, stdev=1095.76, samples=4 00:27:57.664 iops : min=18282, max=18856, avg=18689.50, stdev=273.94, samples=4 00:27:57.664 lat (msec) : 2=0.01%, 4=99.71%, 10=0.29% 00:27:57.664 cpu : usr=99.50%, sys=0.15%, ctx=15, majf=0, minf=2 00:27:57.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:57.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:57.664 issued rwts: total=37440,37447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.664 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:57.664 00:27:57.664 Run status group 0 (all jobs): 00:27:57.664 READ: bw=73.0MiB/s (76.6MB/s), 73.0MiB/s-73.0MiB/s (76.6MB/s-76.6MB/s), io=146MiB (153MB), run=2003-2003msec 00:27:57.664 WRITE: bw=73.0MiB/s (76.6MB/s), 73.0MiB/s-73.0MiB/s (76.6MB/s-76.6MB/s), io=146MiB (153MB), run=2003-2003msec 00:27:57.664 21:31:32 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:57.664 21:31:32 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:57.664 21:31:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:57.664 21:31:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:57.664 21:31:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:57.664 21:31:32 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:57.664 21:31:32 -- common/autotest_common.sh@1320 -- # shift 00:27:57.664 21:31:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:57.664 21:31:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.664 21:31:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:57.664 21:31:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:57.664 21:31:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:57.664 21:31:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:57.664 21:31:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:57.664 21:31:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.664 21:31:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:57.664 21:31:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:57.664 21:31:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:57.664 21:31:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:57.664 21:31:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:57.664 21:31:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:57.664 21:31:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:57.664 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:57.664 fio-3.35 00:27:57.664 Starting 1 thread 00:27:57.664 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.230 00:28:00.230 test: (groupid=0, jobs=1): err= 0: pid=1825000: Fri Jul 26 21:31:34 2024 00:28:00.230 read: IOPS=15.0k, BW=234MiB/s (245MB/s)(461MiB/1971msec) 00:28:00.230 slat (nsec): min=2274, max=48546, avg=2644.71, stdev=1099.44 00:28:00.230 clat (usec): min=461, max=8311, avg=1626.40, stdev=1300.59 00:28:00.230 lat (usec): min=464, max=8330, avg=1629.05, stdev=1300.93 00:28:00.230 clat percentiles (usec): 00:28:00.230 | 1.00th=[ 660], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 889], 00:28:00.230 | 30.00th=[ 963], 40.00th=[ 1057], 50.00th=[ 1172], 60.00th=[ 1270], 00:28:00.230 | 70.00th=[ 1401], 80.00th=[ 1614], 90.00th=[ 4555], 95.00th=[ 4752], 00:28:00.230 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7177], 99.95th=[ 7308], 00:28:00.230 | 99.99th=[ 8291] 00:28:00.230 bw ( KiB/s): min=107072, max=121536, per=48.30%, avg=115557.75, stdev=6086.13, samples=4 00:28:00.230 iops : min= 6692, max= 7596, avg=7222.25, stdev=380.36, samples=4 00:28:00.230 write: IOPS=8481, BW=133MiB/s (139MB/s)(235MiB/1771msec); 0 zone resets 00:28:00.230 slat (usec): min=26, max=126, avg=28.85, 
stdev= 5.11 00:28:00.230 clat (usec): min=3943, max=19490, avg=12127.60, stdev=1798.38 00:28:00.230 lat (usec): min=3971, max=19518, avg=12156.45, stdev=1798.09 00:28:00.230 clat percentiles (usec): 00:28:00.230 | 1.00th=[ 6521], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10814], 00:28:00.230 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12125], 60.00th=[12649], 00:28:00.230 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14222], 95.00th=[15008], 00:28:00.230 | 99.00th=[16450], 99.50th=[16909], 99.90th=[18482], 99.95th=[19006], 00:28:00.230 | 99.99th=[19530] 00:28:00.230 bw ( KiB/s): min=115040, max=125312, per=88.51%, avg=120100.75, stdev=4498.29, samples=4 00:28:00.230 iops : min= 7190, max= 7832, avg=7506.25, stdev=281.17, samples=4 00:28:00.230 lat (usec) : 500=0.01%, 750=3.15%, 1000=19.69% 00:28:00.230 lat (msec) : 2=33.81%, 4=2.34%, 10=10.53%, 20=30.46% 00:28:00.230 cpu : usr=95.88%, sys=2.18%, ctx=264, majf=0, minf=1 00:28:00.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:00.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:00.230 issued rwts: total=29472,15020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:00.230 00:28:00.230 Run status group 0 (all jobs): 00:28:00.230 READ: bw=234MiB/s (245MB/s), 234MiB/s-234MiB/s (245MB/s-245MB/s), io=461MiB (483MB), run=1971-1971msec 00:28:00.230 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=235MiB (246MB), run=1771-1771msec 00:28:00.230 21:31:34 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.230 21:31:34 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:00.230 21:31:34 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:00.230 21:31:34 -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:00.230 21:31:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:00.230 21:31:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:00.230 21:31:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:00.230 21:31:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:00.231 21:31:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:00.231 21:31:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:00.231 21:31:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:28:00.497 21:31:35 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:28:03.787 Nvme0n1 00:28:03.787 21:31:38 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:09.060 21:31:43 -- host/fio.sh@53 -- # ls_guid=d27f6491-36ad-4393-8919-e1b48c7d83e8 00:28:09.060 21:31:43 -- host/fio.sh@54 -- # get_lvs_free_mb d27f6491-36ad-4393-8919-e1b48c7d83e8 00:28:09.060 21:31:43 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d27f6491-36ad-4393-8919-e1b48c7d83e8 00:28:09.060 21:31:43 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:09.060 21:31:43 -- common/autotest_common.sh@1345 -- # local fc 00:28:09.060 21:31:43 -- common/autotest_common.sh@1346 -- # local cs 00:28:09.060 21:31:43 -- 
common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:09.060 21:31:43 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:09.060 { 00:28:09.060 "uuid": "d27f6491-36ad-4393-8919-e1b48c7d83e8", 00:28:09.060 "name": "lvs_0", 00:28:09.060 "base_bdev": "Nvme0n1", 00:28:09.060 "total_data_clusters": 1862, 00:28:09.060 "free_clusters": 1862, 00:28:09.060 "block_size": 512, 00:28:09.060 "cluster_size": 1073741824 00:28:09.060 } 00:28:09.060 ]' 00:28:09.060 21:31:43 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d27f6491-36ad-4393-8919-e1b48c7d83e8") .free_clusters' 00:28:09.060 21:31:43 -- common/autotest_common.sh@1348 -- # fc=1862 00:28:09.060 21:31:43 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d27f6491-36ad-4393-8919-e1b48c7d83e8") .cluster_size' 00:28:09.060 21:31:43 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:28:09.060 21:31:43 -- common/autotest_common.sh@1352 -- # free_mb=1906688 00:28:09.060 21:31:43 -- common/autotest_common.sh@1353 -- # echo 1906688 00:28:09.060 1906688 00:28:09.060 21:31:43 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:28:09.626 ee49e045-f0a6-4668-a068-b2789a5c55f6 00:28:09.626 21:31:44 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:09.885 21:31:44 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:10.144 21:31:44 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:28:10.144 21:31:44 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:10.144 21:31:44 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:10.144 21:31:44 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:10.144 21:31:44 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:10.144 21:31:44 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:10.144 21:31:44 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:10.144 21:31:44 -- common/autotest_common.sh@1320 -- # shift 00:28:10.144 21:31:44 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:10.144 21:31:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:10.144 21:31:44 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:10.144 21:31:44 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:10.144 21:31:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:10.144 21:31:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:10.144 21:31:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:10.144 21:31:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 
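(Sketch, not part of the captured output.) The fio_plugin wrapper traced here reduces to preloading the SPDK fio engine and pointing fio at the RDMA target, as in the LD_PRELOAD and fio lines captured below (paths shortened):
  LD_PRELOAD=$rootdir/build/fio/spdk_nvme \
    /usr/src/fio/fio $rootdir/app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096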
00:28:10.144 21:31:44 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:10.144 21:31:44 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:10.144 21:31:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:10.144 21:31:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:10.144 21:31:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:10.144 21:31:44 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:10.144 21:31:44 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:10.401 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:10.401 fio-3.35 00:28:10.401 Starting 1 thread 00:28:10.660 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.193 00:28:13.193 test: (groupid=0, jobs=1): err= 0: pid=1827315: Fri Jul 26 21:31:47 2024 00:28:13.193 read: IOPS=9940, BW=38.8MiB/s (40.7MB/s)(77.9MiB/2005msec) 00:28:13.193 slat (nsec): min=1345, max=15076, avg=1439.31, stdev=189.93 00:28:13.193 clat (usec): min=203, max=348006, avg=6385.66, stdev=19507.50 00:28:13.193 lat (usec): min=205, max=348009, avg=6387.10, stdev=19507.53 00:28:13.193 clat percentiles (msec): 00:28:13.193 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:28:13.193 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:28:13.193 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:28:13.193 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 347], 99.95th=[ 347], 00:28:13.193 | 99.99th=[ 347] 00:28:13.193 bw ( KiB/s): min=13688, max=48704, per=99.94%, avg=39738.00, stdev=17368.06, samples=4 00:28:13.193 iops : min= 3422, max=12176, avg=9934.50, stdev=4342.01, samples=4 00:28:13.193 write: IOPS=9963, BW=38.9MiB/s (40.8MB/s)(78.0MiB/2005msec); 0 zone resets 00:28:13.193 slat (nsec): min=1390, max=97223, avg=1544.38, stdev=725.68 00:28:13.193 clat (usec): min=186, max=348361, avg=6352.96, stdev=18948.12 00:28:13.194 lat (usec): min=187, max=348364, avg=6354.51, stdev=18948.18 00:28:13.194 clat percentiles (msec): 00:28:13.194 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:28:13.194 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:28:13.194 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:28:13.194 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 351], 99.95th=[ 351], 00:28:13.194 | 99.99th=[ 351] 00:28:13.194 bw ( KiB/s): min=14176, max=48424, per=99.92%, avg=39820.00, stdev=17096.16, samples=4 00:28:13.194 iops : min= 3544, max=12106, avg=9955.00, stdev=4274.04, samples=4 00:28:13.194 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:28:13.194 lat (msec) : 2=0.04%, 4=0.23%, 10=99.37%, 500=0.32% 00:28:13.194 cpu : usr=99.65%, sys=0.00%, ctx=16, majf=0, minf=2 00:28:13.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:13.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:13.194 issued rwts: total=19930,19976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:13.194 00:28:13.194 Run status group 0 (all jobs): 00:28:13.194 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s 
(40.7MB/s-40.7MB/s), io=77.9MiB (81.6MB), run=2005-2005msec 00:28:13.194 WRITE: bw=38.9MiB/s (40.8MB/s), 38.9MiB/s-38.9MiB/s (40.8MB/s-40.8MB/s), io=78.0MiB (81.8MB), run=2005-2005msec 00:28:13.194 21:31:47 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:13.194 21:31:47 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:14.571 21:31:49 -- host/fio.sh@64 -- # ls_nested_guid=130cf646-7ced-4ae0-992d-4f9b8d6979a2 00:28:14.571 21:31:49 -- host/fio.sh@65 -- # get_lvs_free_mb 130cf646-7ced-4ae0-992d-4f9b8d6979a2 00:28:14.571 21:31:49 -- common/autotest_common.sh@1343 -- # local lvs_uuid=130cf646-7ced-4ae0-992d-4f9b8d6979a2 00:28:14.571 21:31:49 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:14.572 21:31:49 -- common/autotest_common.sh@1345 -- # local fc 00:28:14.572 21:31:49 -- common/autotest_common.sh@1346 -- # local cs 00:28:14.572 21:31:49 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:14.572 21:31:49 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:14.572 { 00:28:14.572 "uuid": "d27f6491-36ad-4393-8919-e1b48c7d83e8", 00:28:14.572 "name": "lvs_0", 00:28:14.572 "base_bdev": "Nvme0n1", 00:28:14.572 "total_data_clusters": 1862, 00:28:14.572 "free_clusters": 0, 00:28:14.572 "block_size": 512, 00:28:14.572 "cluster_size": 1073741824 00:28:14.572 }, 00:28:14.572 { 00:28:14.572 "uuid": "130cf646-7ced-4ae0-992d-4f9b8d6979a2", 00:28:14.572 "name": "lvs_n_0", 00:28:14.572 "base_bdev": "ee49e045-f0a6-4668-a068-b2789a5c55f6", 00:28:14.572 "total_data_clusters": 476206, 00:28:14.572 "free_clusters": 476206, 00:28:14.572 "block_size": 512, 00:28:14.572 "cluster_size": 4194304 00:28:14.572 } 00:28:14.572 ]' 00:28:14.572 21:31:49 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="130cf646-7ced-4ae0-992d-4f9b8d6979a2") .free_clusters' 00:28:14.572 21:31:49 -- common/autotest_common.sh@1348 -- # fc=476206 00:28:14.572 21:31:49 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="130cf646-7ced-4ae0-992d-4f9b8d6979a2") .cluster_size' 00:28:14.572 21:31:49 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:14.572 21:31:49 -- common/autotest_common.sh@1352 -- # free_mb=1904824 00:28:14.572 21:31:49 -- common/autotest_common.sh@1353 -- # echo 1904824 00:28:14.572 1904824 00:28:14.572 21:31:49 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:28:15.509 2c0615a0-461e-4670-8228-7db7b15f7798 00:28:15.509 21:31:50 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:15.509 21:31:50 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:15.768 21:31:50 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:28:16.027 21:31:50 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:16.027 21:31:50 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:16.027 21:31:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:16.027 21:31:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:16.027 21:31:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:16.027 21:31:50 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.027 21:31:50 -- common/autotest_common.sh@1320 -- # shift 00:28:16.027 21:31:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:16.027 21:31:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.027 21:31:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.027 21:31:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:16.027 21:31:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:16.027 21:31:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:16.027 21:31:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:16.027 21:31:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.027 21:31:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.027 21:31:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:16.027 21:31:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:16.027 21:31:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:16.027 21:31:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:16.027 21:31:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:16.027 21:31:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:16.287 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:16.287 fio-3.35 00:28:16.287 Starting 1 thread 00:28:16.287 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.823 00:28:18.823 test: (groupid=0, jobs=1): err= 0: pid=1828525: Fri Jul 26 21:31:53 2024 00:28:18.823 read: IOPS=10.6k, BW=41.2MiB/s (43.3MB/s)(82.7MiB/2006msec) 00:28:18.823 slat (nsec): min=1347, max=22074, avg=1451.62, stdev=330.62 00:28:18.823 clat (usec): min=2541, max=10695, avg=5977.81, stdev=162.17 00:28:18.823 lat (usec): min=2563, max=10697, avg=5979.26, stdev=162.13 00:28:18.823 clat percentiles (usec): 00:28:18.823 | 1.00th=[ 5866], 5.00th=[ 5932], 10.00th=[ 5932], 20.00th=[ 5932], 00:28:18.823 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 5997], 60.00th=[ 5997], 00:28:18.823 | 70.00th=[ 5997], 80.00th=[ 5997], 90.00th=[ 5997], 95.00th=[ 6063], 00:28:18.823 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 8094], 99.95th=[ 9503], 00:28:18.823 | 99.99th=[ 9634] 00:28:18.823 bw ( KiB/s): min=40480, max=43008, per=100.00%, avg=42256.00, stdev=1198.04, samples=4 00:28:18.823 iops : min=10120, max=10752, avg=10564.00, stdev=299.51, samples=4 00:28:18.823 write: IOPS=10.6k, BW=41.2MiB/s (43.2MB/s)(82.7MiB/2006msec); 0 zone resets 00:28:18.823 slat (nsec): min=1390, 
max=17456, avg=1557.93, stdev=353.12 00:28:18.823 clat (usec): min=3947, max=10714, avg=6000.50, stdev=183.20 00:28:18.823 lat (usec): min=3952, max=10716, avg=6002.06, stdev=183.16 00:28:18.823 clat percentiles (usec): 00:28:18.823 | 1.00th=[ 5866], 5.00th=[ 5932], 10.00th=[ 5932], 20.00th=[ 5997], 00:28:18.823 | 30.00th=[ 5997], 40.00th=[ 5997], 50.00th=[ 5997], 60.00th=[ 5997], 00:28:18.823 | 70.00th=[ 5997], 80.00th=[ 5997], 90.00th=[ 6063], 95.00th=[ 6063], 00:28:18.823 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 9241], 99.95th=[ 9634], 00:28:18.823 | 99.99th=[10683] 00:28:18.823 bw ( KiB/s): min=40944, max=42856, per=99.96%, avg=42218.00, stdev=862.85, samples=4 00:28:18.823 iops : min=10236, max=10714, avg=10554.50, stdev=215.71, samples=4 00:28:18.823 lat (msec) : 4=0.05%, 10=99.93%, 20=0.02% 00:28:18.823 cpu : usr=99.45%, sys=0.20%, ctx=16, majf=0, minf=2 00:28:18.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:18.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.823 issued rwts: total=21183,21181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.823 00:28:18.823 Run status group 0 (all jobs): 00:28:18.823 READ: bw=41.2MiB/s (43.3MB/s), 41.2MiB/s-41.2MiB/s (43.3MB/s-43.3MB/s), io=82.7MiB (86.8MB), run=2006-2006msec 00:28:18.823 WRITE: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=82.7MiB (86.8MB), run=2006-2006msec 00:28:18.823 21:31:53 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:18.823 21:31:53 -- host/fio.sh@74 -- # sync 00:28:18.823 21:31:53 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:26.940 21:32:00 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:26.940 21:32:01 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:28:32.277 21:32:06 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:32.277 21:32:06 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:35.567 21:32:09 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:35.567 21:32:09 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:35.567 21:32:09 -- host/fio.sh@86 -- # nvmftestfini 00:28:35.567 21:32:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:35.567 21:32:09 -- nvmf/common.sh@116 -- # sync 00:28:35.567 21:32:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:35.567 21:32:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:35.567 21:32:09 -- nvmf/common.sh@119 -- # set +e 00:28:35.567 21:32:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:35.567 21:32:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:35.567 rmmod nvme_rdma 00:28:35.567 rmmod nvme_fabrics 00:28:35.567 21:32:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:35.567 21:32:09 -- nvmf/common.sh@123 -- # set -e 00:28:35.567 21:32:09 -- nvmf/common.sh@124 -- # return 0 00:28:35.567 21:32:09 -- nvmf/common.sh@477 -- # '[' -n 1823894 ']' 00:28:35.567 21:32:09 -- nvmf/common.sh@478 -- # killprocess 1823894 
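For reference, the teardown traced above follows the creation order in reverse: the subsystem that exported the nested lvol goes first, then the lvols and lvstores from the innermost out, and finally the local NVMe controller is released. A condensed recap sketched from the rpc.py calls in this trace ($SPDK abbreviates /var/jenkins/workspace/nvmf-phy-autotest/spdk; this is a summary of the log, not a verbatim excerpt):
  $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3    # subsystem backed by the nested lvol
  sync                                                                     # flush before deleting the lvols
  $SPDK/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0                 # nested lvol
  $SPDK/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0                 # nested lvstore (created on lvs_0/lbd_0 earlier in the trace)
  $SPDK/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0                        # outer lvol
  $SPDK/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0                   # outer lvstore (on Nvme0n1)
  $SPDK/scripts/rpc.py bdev_nvme_detach_controller Nvme0                   # release the local NVMe device
After this, nvmftestfini unloads nvme-rdma and nvme-fabrics (the rmmod lines above) and kills the nvmf_tgt process, pid 1823894, as traced in the killprocess lines that follow.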
00:28:35.567 21:32:09 -- common/autotest_common.sh@926 -- # '[' -z 1823894 ']' 00:28:35.567 21:32:09 -- common/autotest_common.sh@930 -- # kill -0 1823894 00:28:35.567 21:32:09 -- common/autotest_common.sh@931 -- # uname 00:28:35.567 21:32:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:35.567 21:32:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1823894 00:28:35.567 21:32:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:35.567 21:32:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:35.567 21:32:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1823894' 00:28:35.567 killing process with pid 1823894 00:28:35.567 21:32:09 -- common/autotest_common.sh@945 -- # kill 1823894 00:28:35.567 21:32:09 -- common/autotest_common.sh@950 -- # wait 1823894 00:28:35.567 21:32:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:35.567 21:32:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:35.567 00:28:35.567 real 0m51.257s 00:28:35.567 user 3m37.485s 00:28:35.567 sys 0m8.928s 00:28:35.567 21:32:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.567 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:28:35.567 ************************************ 00:28:35.567 END TEST nvmf_fio_host 00:28:35.567 ************************************ 00:28:35.567 21:32:10 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:28:35.567 21:32:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:35.567 21:32:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:35.567 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:28:35.567 ************************************ 00:28:35.567 START TEST nvmf_failover 00:28:35.567 ************************************ 00:28:35.567 21:32:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:28:35.567 * Looking for test storage... 
00:28:35.567 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:35.567 21:32:10 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.567 21:32:10 -- nvmf/common.sh@7 -- # uname -s 00:28:35.567 21:32:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.567 21:32:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.567 21:32:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.567 21:32:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.567 21:32:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.567 21:32:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.567 21:32:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.567 21:32:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.567 21:32:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.567 21:32:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.827 21:32:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:35.827 21:32:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:35.827 21:32:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.827 21:32:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.827 21:32:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.827 21:32:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:35.827 21:32:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.827 21:32:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.827 21:32:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.827 21:32:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.827 21:32:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.827 21:32:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.827 21:32:10 -- paths/export.sh@5 -- # export PATH 00:28:35.827 21:32:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.827 21:32:10 -- nvmf/common.sh@46 -- # : 0 00:28:35.827 21:32:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:35.827 21:32:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:35.827 21:32:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:35.827 21:32:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.827 21:32:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.827 21:32:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:35.827 21:32:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:35.827 21:32:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:35.827 21:32:10 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:35.827 21:32:10 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:35.827 21:32:10 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:28:35.827 21:32:10 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:35.827 21:32:10 -- host/failover.sh@18 -- # nvmftestinit 00:28:35.827 21:32:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:35.827 21:32:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.827 21:32:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:35.827 21:32:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:35.827 21:32:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:35.827 21:32:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.827 21:32:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.827 21:32:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.827 21:32:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:35.827 21:32:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:35.827 21:32:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:35.827 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:28:43.951 21:32:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:43.951 21:32:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:43.951 21:32:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:43.951 21:32:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:43.951 21:32:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:43.951 21:32:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:43.951 21:32:18 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:28:43.951 21:32:18 -- nvmf/common.sh@294 -- # net_devs=() 00:28:43.951 21:32:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:43.951 21:32:18 -- nvmf/common.sh@295 -- # e810=() 00:28:43.951 21:32:18 -- nvmf/common.sh@295 -- # local -ga e810 00:28:43.951 21:32:18 -- nvmf/common.sh@296 -- # x722=() 00:28:43.951 21:32:18 -- nvmf/common.sh@296 -- # local -ga x722 00:28:43.951 21:32:18 -- nvmf/common.sh@297 -- # mlx=() 00:28:43.951 21:32:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:43.951 21:32:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.951 21:32:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:43.951 21:32:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:43.951 21:32:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:43.951 21:32:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:43.951 21:32:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:43.951 21:32:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:43.951 21:32:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:43.951 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:43.951 21:32:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:43.951 21:32:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:43.951 21:32:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:43.951 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:43.951 21:32:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:43.951 21:32:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:43.951 21:32:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:43.951 21:32:18 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:43.951 21:32:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.951 21:32:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:43.951 21:32:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.951 21:32:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:43.951 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:43.951 21:32:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.951 21:32:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:43.951 21:32:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.951 21:32:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:43.951 21:32:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.951 21:32:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:43.951 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:43.951 21:32:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.951 21:32:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:43.951 21:32:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:43.951 21:32:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:43.951 21:32:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:43.951 21:32:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:43.951 21:32:18 -- nvmf/common.sh@57 -- # uname 00:28:43.951 21:32:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:43.951 21:32:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:43.951 21:32:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:43.951 21:32:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:43.951 21:32:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:43.951 21:32:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:43.951 21:32:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:43.951 21:32:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:43.951 21:32:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:43.951 21:32:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:43.951 21:32:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:43.951 21:32:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:43.951 21:32:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:43.951 21:32:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:43.951 21:32:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:43.951 21:32:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:43.951 21:32:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:43.951 21:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:43.952 21:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:43.952 21:32:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:43.952 21:32:18 -- nvmf/common.sh@104 -- # continue 2 00:28:43.952 21:32:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:43.952 21:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:43.952 21:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:43.952 21:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:43.952 21:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:28:43.952 21:32:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:43.952 21:32:18 -- nvmf/common.sh@104 -- # continue 2 00:28:43.952 21:32:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:43.952 21:32:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:43.952 21:32:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:43.952 21:32:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:43.952 21:32:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:43.952 21:32:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:43.952 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:43.952 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:43.952 altname enp217s0f0np0 00:28:43.952 altname ens818f0np0 00:28:43.952 inet 192.168.100.8/24 scope global mlx_0_0 00:28:43.952 valid_lft forever preferred_lft forever 00:28:43.952 21:32:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:43.952 21:32:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:43.952 21:32:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:43.952 21:32:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:43.952 21:32:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:43.952 21:32:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:43.952 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:43.952 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:43.952 altname enp217s0f1np1 00:28:43.952 altname ens818f1np1 00:28:43.952 inet 192.168.100.9/24 scope global mlx_0_1 00:28:43.952 valid_lft forever preferred_lft forever 00:28:43.952 21:32:18 -- nvmf/common.sh@410 -- # return 0 00:28:43.952 21:32:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:43.952 21:32:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:43.952 21:32:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:43.952 21:32:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:43.952 21:32:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:43.952 21:32:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:43.952 21:32:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:43.952 21:32:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:43.952 21:32:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:43.952 21:32:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:43.952 21:32:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:43.952 21:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:43.952 21:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:43.952 21:32:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:43.952 21:32:18 -- nvmf/common.sh@104 -- # continue 2 00:28:43.952 21:32:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:43.952 21:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:43.952 21:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:43.952 21:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:28:43.952 21:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:43.952 21:32:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:43.952 21:32:18 -- nvmf/common.sh@104 -- # continue 2 00:28:43.952 21:32:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:43.952 21:32:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:43.952 21:32:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:43.952 21:32:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:43.952 21:32:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:43.952 21:32:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:43.952 21:32:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:43.952 21:32:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:43.952 192.168.100.9' 00:28:43.952 21:32:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:43.952 192.168.100.9' 00:28:43.952 21:32:18 -- nvmf/common.sh@445 -- # head -n 1 00:28:43.952 21:32:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:43.952 21:32:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:43.952 192.168.100.9' 00:28:43.952 21:32:18 -- nvmf/common.sh@446 -- # tail -n +2 00:28:43.952 21:32:18 -- nvmf/common.sh@446 -- # head -n 1 00:28:43.952 21:32:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:43.952 21:32:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:43.952 21:32:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:43.952 21:32:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:43.952 21:32:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:43.952 21:32:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:43.952 21:32:18 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:43.952 21:32:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:43.952 21:32:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:43.952 21:32:18 -- common/autotest_common.sh@10 -- # set +x 00:28:43.952 21:32:18 -- nvmf/common.sh@469 -- # nvmfpid=1835678 00:28:43.952 21:32:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:43.952 21:32:18 -- nvmf/common.sh@470 -- # waitforlisten 1835678 00:28:43.952 21:32:18 -- common/autotest_common.sh@819 -- # '[' -z 1835678 ']' 00:28:43.952 21:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.952 21:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:43.952 21:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.952 21:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:43.952 21:32:18 -- common/autotest_common.sh@10 -- # set +x 00:28:44.211 [2024-07-26 21:32:18.858053] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:28:44.211 [2024-07-26 21:32:18.858100] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.211 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.211 [2024-07-26 21:32:18.941139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.212 [2024-07-26 21:32:18.978364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:44.212 [2024-07-26 21:32:18.978476] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.212 [2024-07-26 21:32:18.978486] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.212 [2024-07-26 21:32:18.978495] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.212 [2024-07-26 21:32:18.978539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.212 [2024-07-26 21:32:18.978621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.212 [2024-07-26 21:32:18.978623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.780 21:32:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:44.780 21:32:19 -- common/autotest_common.sh@852 -- # return 0 00:28:44.780 21:32:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:44.780 21:32:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:44.780 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:28:45.039 21:32:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.039 21:32:19 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:45.039 [2024-07-26 21:32:19.868728] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x128c860/0x1290d50) succeed. 00:28:45.039 [2024-07-26 21:32:19.878886] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x128ddb0/0x12d23e0) succeed. 
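Before the failover fixture is built, the trace above shows how common.sh derives the RDMA target address from the Mellanox netdevs and how failover.sh brings the target up. A condensed sketch of those steps, using the values observed in this run ($SPDK abbreviates the workspace spdk path):
  traddr=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)    # 192.168.100.8 here; mlx_0_1 yields 192.168.100.9
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &                         # nvmfappstart; target pid 1835678 in this run
  $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport; both mlx5 ports register as IB devices (the "Create IB device ... succeed" notices above)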
00:28:45.299 21:32:20 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:45.558 Malloc0 00:28:45.558 21:32:20 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.558 21:32:20 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:45.817 21:32:20 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:46.076 [2024-07-26 21:32:20.696591] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:46.076 21:32:20 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:46.076 [2024-07-26 21:32:20.872957] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:46.076 21:32:20 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:46.336 [2024-07-26 21:32:21.041537] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:28:46.336 21:32:21 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:46.336 21:32:21 -- host/failover.sh@31 -- # bdevperf_pid=1835984 00:28:46.336 21:32:21 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:46.336 21:32:21 -- host/failover.sh@34 -- # waitforlisten 1835984 /var/tmp/bdevperf.sock 00:28:46.336 21:32:21 -- common/autotest_common.sh@819 -- # '[' -z 1835984 ']' 00:28:46.336 21:32:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:46.336 21:32:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:46.336 21:32:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:46.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
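Target and initiator setup for the failover run, condensed from the rpc.py and bdevperf invocations above (a sketch, with the three listener calls folded into a loop; $SPDK abbreviates the workspace spdk path): a small malloc bdev is exported through subsystem cnode1 with RDMA listeners on three ports, and bdevperf is started in RPC mode so listeners can be added and removed while I/O runs:
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # size 64, 512-byte blocks
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
  done
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f   # waits for RPCs on bdevperf.sock
The trace that follows attaches NVMe0 over ports 4420 and 4421, then removes and re-adds listeners to force path failover while the 15-second verify workload runs.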
00:28:46.336 21:32:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:46.336 21:32:21 -- common/autotest_common.sh@10 -- # set +x 00:28:47.273 21:32:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:47.273 21:32:21 -- common/autotest_common.sh@852 -- # return 0 00:28:47.273 21:32:21 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:47.532 NVMe0n1 00:28:47.532 21:32:22 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:47.532 00:28:47.792 21:32:22 -- host/failover.sh@39 -- # run_test_pid=1836256 00:28:47.792 21:32:22 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:47.792 21:32:22 -- host/failover.sh@41 -- # sleep 1 00:28:48.727 21:32:23 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:48.727 21:32:23 -- host/failover.sh@45 -- # sleep 3 00:28:52.015 21:32:26 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:52.016 00:28:52.016 21:32:26 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:52.274 21:32:27 -- host/failover.sh@50 -- # sleep 3 00:28:55.645 21:32:30 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:55.645 [2024-07-26 21:32:30.174023] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:55.645 21:32:30 -- host/failover.sh@55 -- # sleep 1 00:28:56.583 21:32:31 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:56.583 21:32:31 -- host/failover.sh@59 -- # wait 1836256 00:29:03.155 0 00:29:03.155 21:32:37 -- host/failover.sh@61 -- # killprocess 1835984 00:29:03.155 21:32:37 -- common/autotest_common.sh@926 -- # '[' -z 1835984 ']' 00:29:03.155 21:32:37 -- common/autotest_common.sh@930 -- # kill -0 1835984 00:29:03.155 21:32:37 -- common/autotest_common.sh@931 -- # uname 00:29:03.155 21:32:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:03.155 21:32:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1835984 00:29:03.155 21:32:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:03.155 21:32:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:03.155 21:32:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1835984' 00:29:03.155 killing process with pid 1835984 00:29:03.155 21:32:37 -- common/autotest_common.sh@945 -- # kill 1835984 00:29:03.155 21:32:37 -- common/autotest_common.sh@950 -- # wait 1835984 00:29:03.155 21:32:37 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:03.155 [2024-07-26 21:32:21.098567] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:03.155 [2024-07-26 21:32:21.098633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835984 ] 00:29:03.155 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.155 [2024-07-26 21:32:21.184495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.155 [2024-07-26 21:32:21.221508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.155 Running I/O for 15 seconds... 00:29:03.155 [2024-07-26 21:32:24.568161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x182900 00:29:03.155 [2024-07-26 21:32:24.568207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.155 [2024-07-26 21:32:24.568225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x184300 00:29:03.155 [2024-07-26 21:32:24.568235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.155 [2024-07-26 21:32:24.568247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x184300 00:29:03.155 [2024-07-26 21:32:24.568257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182900 00:29:03.156 [2024-07-26 21:32:24.568297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:85936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182900 00:29:03.156 [2024-07-26 21:32:24.568357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x184300 00:29:03.156 [2024-07-26 21:32:24.568399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182900 00:29:03.156 [2024-07-26 21:32:24.568429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x184300 00:29:03.156 [2024-07-26 21:32:24.568452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x184300 00:29:03.156 [2024-07-26 21:32:24.568498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x182900 00:29:03.156 [2024-07-26 21:32:24.568522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x184300 00:29:03.156 [2024-07-26 21:32:24.568549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568574] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x184300 00:29:03.156 [2024-07-26 21:32:24.568619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182900 00:29:03.156 [2024-07-26 21:32:24.568691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182900 00:29:03.156 [2024-07-26 21:32:24.568711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x184300 00:29:03.156 [2024-07-26 21:32:24.568752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.156 [2024-07-26 21:32:24.568772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 00:29:03.156 [2024-07-26 21:32:24.568782] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x182900
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs, 2024-07-26 21:32:24.568792 through 21:32:24.570891 (elapsed 00:29:03.156-00:29:03.159): queued READ/WRITE commands on sqid:1 nsid:1 each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:25882 cdw0:fe0dd000 sqhd:9322 p:1 m:0 dnr:0 ...]
00:29:03.159 [2024-07-26 21:32:24.572761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:03.159 [2024-07-26 21:32:24.572776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:03.159 [2024-07-26 21:32:24.572785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87248 len:8 PRP1 0x0 PRP2 0x0
00:29:03.159 [2024-07-26 21:32:24.572795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.159 [2024-07-26 21:32:24.572836] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller.
00:29:03.159 [2024-07-26 21:32:24.572853] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:29:03.159 [2024-07-26 21:32:24.572864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.159 [2024-07-26 21:32:24.574733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.159 [2024-07-26 21:32:24.589163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:29:03.159 [2024-07-26 21:32:24.622768] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... repeated nvme_admin_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs, 2024-07-26 21:32:28.001493 through 21:32:28.001599 (elapsed 00:29:03.159): ASYNC EVENT REQUEST (0c) qid:0 cid:1-4 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:25884 cdw0:7e6b37f0 sqhd:d99a p:1 m:0 dnr:0 ...]
00:29:03.159 [2024-07-26 21:32:28.003333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:29:03.159 [2024-07-26 21:32:28.003351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.159 [2024-07-26 21:32:28.003362] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:29:03.159 [2024-07-26 21:32:28.003373] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs, 2024-07-26 21:32:28.003390 through 21:32:28.006291 (elapsed 00:29:03.159-00:29:03.162): queued READ/WRITE commands on sqid:1 nsid:1 each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 ...]
00:29:03.162 [2024-07-26 
21:32:28.006321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.006332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.006372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.006398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182900 00:29:03.162 [2024-07-26 21:32:28.006440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.006480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.006521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.006561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182900 00:29:03.162 [2024-07-26 21:32:28.006601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.006648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49528 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.006690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.006733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.006759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.006785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182900 00:29:03.162 [2024-07-26 21:32:28.006826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.006867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182900 00:29:03.162 [2024-07-26 21:32:28.006893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.006920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.006946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182900 00:29:03.162 [2024-07-26 21:32:28.006973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.006990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.006999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.007024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x182900 00:29:03.162 [2024-07-26 21:32:28.007067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.007109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x183700 00:29:03.162 [2024-07-26 21:32:28.007150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x182900 00:29:03.162 [2024-07-26 21:32:28.007176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.007217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.007242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.007267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.162 [2024-07-26 21:32:28.007307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182900 00:29:03.162 [2024-07-26 21:32:28.007334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.162 [2024-07-26 21:32:28.007350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182900 00:29:03.162 [2024-07-26 21:32:28.007360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f8900 len:0x1000 key:0x183700 00:29:03.163 [2024-07-26 21:32:28.007386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:28.007413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:28.007442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x183700 00:29:03.163 [2024-07-26 21:32:28.007483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:28.007510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.163 [2024-07-26 21:32:28.007536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48976 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:28.007563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.163 [2024-07-26 21:32:28.007589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:28.007615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.163 [2024-07-26 21:32:28.007662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:28.007703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:28.007745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.007776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c2f00 len:0x1000 key:0x183700 00:29:03.163 [2024-07-26 21:32:28.007787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25884 cdw0:fe0dd000 sqhd:2892 p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.022070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:03.163 [2024-07-26 21:32:28.022092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:03.163 [2024-07-26 21:32:28.022104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49696 len:8 PRP1 0x0 PRP2 0x0 00:29:03.163 [2024-07-26 21:32:28.022115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:28.022163] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:29:03.163 [2024-07-26 21:32:28.022174] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:03.163 [2024-07-26 21:32:28.022200] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:03.163 [2024-07-26 21:32:28.023958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.163 [2024-07-26 21:32:28.053666] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:03.163 [2024-07-26 21:32:32.364328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.163 [2024-07-26 21:32:32.364371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x184300 00:29:03.163 [2024-07-26 21:32:32.364399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x184300 00:29:03.163 [2024-07-26 21:32:32.364419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f5780 len:0x1000 key:0x184300 00:29:03.163 [2024-07-26 21:32:32.364461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.163 [2024-07-26 21:32:32.364481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.163 [2024-07-26 21:32:32.364501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x184300 00:29:03.163 [2024-07-26 21:32:32.364612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.163 [2024-07-26 21:32:32.364679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4780 len:0x1000 key:0x184300 00:29:03.163 [2024-07-26 21:32:32.364721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.163 [2024-07-26 21:32:32.364743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.163 [2024-07-26 21:32:32.364777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x182900 00:29:03.163 [2024-07-26 21:32:32.364787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.364811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.364831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.364851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.364871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.364891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.364912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 
len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.364932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.364952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.364972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.364984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013878b00 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.364993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.365034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.164 [2024-07-26 21:32:32.365075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.164 [2024-07-26 21:32:32.365115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.365155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.365175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.164 [2024-07-26 21:32:32.365215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.164 [2024-07-26 21:32:32.365257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 
cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.164 [2024-07-26 21:32:32.365319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.164 [2024-07-26 21:32:32.365338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.365359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x184300 00:29:03.164 [2024-07-26 21:32:32.365379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x182900 00:29:03.164 [2024-07-26 21:32:32.365419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.164 [2024-07-26 21:32:32.365430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.365479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:114 nsid:1 lba:75584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.365501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.365542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.365562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.365603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.365623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.365666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 
21:32:32.365686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.365748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.365768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.365788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.365828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.365849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 
dnr:0 00:29:03.165 [2024-07-26 21:32:32.365879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.365889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.365929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.365972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.365982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.365991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.366004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388f600 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.366014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.366025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.366035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.366046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.366055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.366066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:76352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.366076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.366087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.366096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.366107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.366116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.366127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x182900 00:29:03.165 [2024-07-26 21:32:32.366137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.366148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.165 [2024-07-26 21:32:32.366157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.165 [2024-07-26 21:32:32.366167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x184300 00:29:03.165 [2024-07-26 21:32:32.366176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x182900 00:29:03.166 [2024-07-26 21:32:32.366198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 
21:32:32.366261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x182900 00:29:03.166 [2024-07-26 21:32:32.366363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e2e80 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182900 00:29:03.166 [2024-07-26 21:32:32.366446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 
cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x182900 00:29:03.166 [2024-07-26 21:32:32.366568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182900 00:29:03.166 [2024-07-26 21:32:32.366588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x184300 00:29:03.166 [2024-07-26 21:32:32.366776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x182900 00:29:03.166 [2024-07-26 21:32:32.366816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75928 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007562000 len:0x1000 key:0x182900 00:29:03.166 [2024-07-26 21:32:32.366836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182900 00:29:03.166 [2024-07-26 21:32:32.366857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.166 [2024-07-26 21:32:32.366919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.166 [2024-07-26 21:32:32.366932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013879b80 len:0x1000 key:0x184300 00:29:03.167 [2024-07-26 21:32:32.366941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.167 [2024-07-26 21:32:32.366951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182900 00:29:03.167 [2024-07-26 21:32:32.366961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.167 [2024-07-26 21:32:32.366972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.167 [2024-07-26 21:32:32.366981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25887 cdw0:fe0dd000 sqhd:489e p:1 m:0 dnr:0 00:29:03.167 [2024-07-26 21:32:32.368830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:03.167 [2024-07-26 21:32:32.368844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:03.167 [2024-07-26 21:32:32.368852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76624 len:8 PRP1 0x0 PRP2 0x0 00:29:03.167 [2024-07-26 21:32:32.368862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.167 [2024-07-26 21:32:32.368903] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was 
disconnected and freed. reset controller. 00:29:03.167 [2024-07-26 21:32:32.368915] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:29:03.167 [2024-07-26 21:32:32.368925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.167 [2024-07-26 21:32:32.370441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.167 [2024-07-26 21:32:32.384310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:03.167 [2024-07-26 21:32:32.415958] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:03.167 00:29:03.167 Latency(us) 00:29:03.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.167 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:03.167 Verification LBA range: start 0x0 length 0x4000 00:29:03.167 NVMe0n1 : 15.00 20032.28 78.25 295.65 0.00 6284.18 353.89 1033476.51 00:29:03.167 =================================================================================================================== 00:29:03.167 Total : 20032.28 78.25 295.65 0.00 6284.18 353.89 1033476.51 00:29:03.167 Received shutdown signal, test time was about 15.000000 seconds 00:29:03.167 00:29:03.167 Latency(us) 00:29:03.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.167 =================================================================================================================== 00:29:03.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:03.167 21:32:37 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:03.167 21:32:37 -- host/failover.sh@65 -- # count=3 00:29:03.167 21:32:37 -- host/failover.sh@67 -- # (( count != 3 )) 00:29:03.167 21:32:37 -- host/failover.sh@73 -- # bdevperf_pid=1838857 00:29:03.167 21:32:37 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:03.167 21:32:37 -- host/failover.sh@75 -- # waitforlisten 1838857 /var/tmp/bdevperf.sock 00:29:03.167 21:32:37 -- common/autotest_common.sh@819 -- # '[' -z 1838857 ']' 00:29:03.167 21:32:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:03.167 21:32:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:03.167 21:32:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:03.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
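[Note] The long run of "ABORTED - SQ DELETION" records above is the expected burst of queued I/O being cancelled while a submission queue is torn down during failover; once the controller reset completes, the script counts how many "Resetting controller successful" lines the first bdevperf run logged and then starts a second bdevperf instance in standalone mode (-z) on /var/tmp/bdevperf.sock. A minimal sketch of that step, assuming the grep input is the try.txt log used elsewhere in this run and that $testdir/$rootdir stand in for the long workspace paths (neither variable name nor the grep input file is shown at this point in the trace):

    # the run above expects exactly three successful controller resets in the log
    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    (( count == 3 )) || exit 1

    # relaunch bdevperf as a standalone RPC server so paths can be attached/detached at runtime
    "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!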
00:29:03.167 21:32:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:03.167 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:29:04.103 21:32:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:04.103 21:32:38 -- common/autotest_common.sh@852 -- # return 0 00:29:04.103 21:32:38 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:29:04.103 [2024-07-26 21:32:38.784275] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:04.103 21:32:38 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:29:04.103 [2024-07-26 21:32:38.948806] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:29:04.362 21:32:38 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:04.362 NVMe0n1 00:29:04.362 21:32:39 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:04.621 00:29:04.621 21:32:39 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:04.881 00:29:04.881 21:32:39 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:04.881 21:32:39 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:05.139 21:32:39 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:05.397 21:32:40 -- host/failover.sh@87 -- # sleep 3 00:29:08.684 21:32:43 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:08.684 21:32:43 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:08.684 21:32:43 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:08.684 21:32:43 -- host/failover.sh@90 -- # run_test_pid=1839776 00:29:08.684 21:32:43 -- host/failover.sh@92 -- # wait 1839776 00:29:09.621 0 00:29:09.621 21:32:44 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:09.621 [2024-07-26 21:32:37.847731] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
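[Note] The rpc.py calls traced above first make the target listen on two additional RDMA ports (4421 and 4422), then attach the NVMe0 controller through each of the three paths from the bdevperf side, and finally detach the 4420 path so that outstanding I/O has to fail over. The same sequence, condensed into a loop for readability (the trace issues three separate attach calls, and $rpc abbreviates the full scripts/rpc.py path):

    rpc=./scripts/rpc.py
    # target side: expose the subsystem on the two extra ports
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
    # host side: attach the same controller through every path via the bdevperf RPC socket
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
             -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # drop the primary path; bdev_nvme should fail over to one of the remaining listeners
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma \
         -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1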
00:29:09.621 [2024-07-26 21:32:37.847791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838857 ] 00:29:09.621 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.621 [2024-07-26 21:32:37.935623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.621 [2024-07-26 21:32:37.968641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.621 [2024-07-26 21:32:40.033657] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:29:09.621 [2024-07-26 21:32:40.034152] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.621 [2024-07-26 21:32:40.034182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.621 [2024-07-26 21:32:40.058517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:09.621 [2024-07-26 21:32:40.073418] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:09.621 Running I/O for 1 seconds... 00:29:09.621 00:29:09.621 Latency(us) 00:29:09.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.621 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:09.621 Verification LBA range: start 0x0 length 0x4000 00:29:09.621 NVMe0n1 : 1.00 25011.48 97.70 0.00 0.00 5093.75 1120.67 17196.65 00:29:09.621 =================================================================================================================== 00:29:09.621 Total : 25011.48 97.70 0.00 0.00 5093.75 1120.67 17196.65 00:29:09.621 21:32:44 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:09.621 21:32:44 -- host/failover.sh@95 -- # grep -q NVMe0 00:29:09.880 21:32:44 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:09.880 21:32:44 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:09.880 21:32:44 -- host/failover.sh@99 -- # grep -q NVMe0 00:29:10.140 21:32:44 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:10.399 21:32:45 -- host/failover.sh@101 -- # sleep 3 00:29:13.691 21:32:48 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:13.691 21:32:48 -- host/failover.sh@103 -- # grep -q NVMe0 00:29:13.691 21:32:48 -- host/failover.sh@108 -- # killprocess 1838857 00:29:13.691 21:32:48 -- common/autotest_common.sh@926 -- # '[' -z 1838857 ']' 00:29:13.691 21:32:48 -- common/autotest_common.sh@930 -- # kill -0 1838857 00:29:13.691 21:32:48 -- common/autotest_common.sh@931 -- # uname 00:29:13.691 21:32:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:13.691 21:32:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 1838857 00:29:13.691 21:32:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:13.691 21:32:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:13.691 21:32:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1838857' 00:29:13.691 killing process with pid 1838857 00:29:13.691 21:32:48 -- common/autotest_common.sh@945 -- # kill 1838857 00:29:13.691 21:32:48 -- common/autotest_common.sh@950 -- # wait 1838857 00:29:13.691 21:32:48 -- host/failover.sh@110 -- # sync 00:29:13.691 21:32:48 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.951 21:32:48 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:13.951 21:32:48 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:13.951 21:32:48 -- host/failover.sh@116 -- # nvmftestfini 00:29:13.951 21:32:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:13.951 21:32:48 -- nvmf/common.sh@116 -- # sync 00:29:13.951 21:32:48 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:13.951 21:32:48 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:13.951 21:32:48 -- nvmf/common.sh@119 -- # set +e 00:29:13.951 21:32:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:13.951 21:32:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:13.951 rmmod nvme_rdma 00:29:13.951 rmmod nvme_fabrics 00:29:13.951 21:32:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:13.951 21:32:48 -- nvmf/common.sh@123 -- # set -e 00:29:13.951 21:32:48 -- nvmf/common.sh@124 -- # return 0 00:29:13.951 21:32:48 -- nvmf/common.sh@477 -- # '[' -n 1835678 ']' 00:29:13.951 21:32:48 -- nvmf/common.sh@478 -- # killprocess 1835678 00:29:13.951 21:32:48 -- common/autotest_common.sh@926 -- # '[' -z 1835678 ']' 00:29:13.951 21:32:48 -- common/autotest_common.sh@930 -- # kill -0 1835678 00:29:13.951 21:32:48 -- common/autotest_common.sh@931 -- # uname 00:29:13.951 21:32:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:13.951 21:32:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1835678 00:29:13.951 21:32:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:13.951 21:32:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:13.951 21:32:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1835678' 00:29:13.951 killing process with pid 1835678 00:29:13.951 21:32:48 -- common/autotest_common.sh@945 -- # kill 1835678 00:29:13.951 21:32:48 -- common/autotest_common.sh@950 -- # wait 1835678 00:29:14.211 21:32:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:14.211 21:32:49 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:14.211 00:29:14.211 real 0m38.695s 00:29:14.211 user 2m3.039s 00:29:14.211 sys 0m8.794s 00:29:14.211 21:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.211 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:29:14.211 ************************************ 00:29:14.211 END TEST nvmf_failover 00:29:14.211 ************************************ 00:29:14.211 21:32:49 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:29:14.211 21:32:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:14.211 21:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.211 21:32:49 -- common/autotest_common.sh@10 -- # set +x 
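[Note] Once the last detach has been verified, the test tears everything down: the standalone bdevperf process is killed, the subsystem is deleted from the target, the scratch log is removed, and nvmftestfini unloads the kernel NVMe-oF modules (the "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines above come from that step) before stopping the nvmf target itself. A condensed sketch using only the calls visible in the trace; the PID variables stand in for the literal 1838857 and 1835678 of this run:

    kill "$bdevperf_pid"                                   # the standalone bdevperf (1838857 here)
    sync
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f "$testdir/try.txt"
    modprobe -v -r nvme-rdma                               # also removes nvme_fabrics, as logged above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                        # the nvmf_tgt serving this suite (1835678 here)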
00:29:14.211 ************************************ 00:29:14.211 START TEST nvmf_discovery 00:29:14.211 ************************************ 00:29:14.211 21:32:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:29:14.471 * Looking for test storage... 00:29:14.471 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:14.471 21:32:49 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.471 21:32:49 -- nvmf/common.sh@7 -- # uname -s 00:29:14.471 21:32:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.471 21:32:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.471 21:32:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.471 21:32:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.471 21:32:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.471 21:32:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.471 21:32:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.471 21:32:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.471 21:32:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.471 21:32:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.471 21:32:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:14.471 21:32:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:14.471 21:32:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.471 21:32:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.471 21:32:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.471 21:32:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:14.471 21:32:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.471 21:32:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.471 21:32:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.471 21:32:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.471 21:32:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.471 21:32:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.471 21:32:49 -- paths/export.sh@5 -- # export PATH 00:29:14.471 21:32:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.471 21:32:49 -- nvmf/common.sh@46 -- # : 0 00:29:14.471 21:32:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:14.471 21:32:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:14.471 21:32:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:14.471 21:32:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.471 21:32:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.471 21:32:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:14.471 21:32:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:14.471 21:32:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:14.471 21:32:49 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:29:14.471 21:32:49 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:29:14.471 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:29:14.471 21:32:49 -- host/discovery.sh@13 -- # exit 0 00:29:14.471 00:29:14.471 real 0m0.135s 00:29:14.471 user 0m0.054s 00:29:14.471 sys 0m0.092s 00:29:14.471 21:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.471 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:29:14.471 ************************************ 00:29:14.471 END TEST nvmf_discovery 00:29:14.471 ************************************ 00:29:14.471 21:32:49 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:29:14.471 21:32:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:14.471 21:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.471 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:29:14.471 ************************************ 00:29:14.471 START TEST nvmf_discovery_remove_ifc 00:29:14.471 ************************************ 00:29:14.471 21:32:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:29:14.731 * Looking for test storage... 
00:29:14.731 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:14.731 21:32:49 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.731 21:32:49 -- nvmf/common.sh@7 -- # uname -s 00:29:14.731 21:32:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.731 21:32:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.731 21:32:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.731 21:32:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.731 21:32:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.731 21:32:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.731 21:32:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.731 21:32:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.731 21:32:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.731 21:32:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.731 21:32:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:14.731 21:32:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:14.731 21:32:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.731 21:32:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.731 21:32:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.731 21:32:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:14.731 21:32:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.731 21:32:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.731 21:32:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.731 21:32:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.731 21:32:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.731 21:32:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.731 21:32:49 -- paths/export.sh@5 -- # export PATH 00:29:14.731 21:32:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.731 21:32:49 -- nvmf/common.sh@46 -- # : 0 00:29:14.731 21:32:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:14.731 21:32:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:14.731 21:32:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:14.731 21:32:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.731 21:32:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.731 21:32:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:14.731 21:32:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:14.731 21:32:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:14.731 21:32:49 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:29:14.731 21:32:49 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:29:14.731 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:29:14.731 21:32:49 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:29:14.731 00:29:14.731 real 0m0.126s 00:29:14.731 user 0m0.052s 00:29:14.731 sys 0m0.083s 00:29:14.731 21:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.731 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:29:14.731 ************************************ 00:29:14.731 END TEST nvmf_discovery_remove_ifc 00:29:14.732 ************************************ 00:29:14.732 21:32:49 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:29:14.732 21:32:49 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:14.732 21:32:49 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:14.732 21:32:49 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:14.732 21:32:49 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:29:14.732 21:32:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:14.732 21:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.732 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:29:14.732 ************************************ 00:29:14.732 START TEST nvmf_bdevperf 00:29:14.732 ************************************ 00:29:14.732 21:32:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:29:14.732 * Looking for test storage... 00:29:14.732 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:14.732 21:32:49 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.732 21:32:49 -- nvmf/common.sh@7 -- # uname -s 00:29:14.732 21:32:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.732 21:32:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.732 21:32:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.732 21:32:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.732 21:32:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.732 21:32:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.732 21:32:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.732 21:32:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.732 21:32:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.732 21:32:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.732 21:32:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:14.732 21:32:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:14.732 21:32:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.732 21:32:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.732 21:32:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.732 21:32:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:14.732 21:32:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.732 21:32:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.732 21:32:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.732 21:32:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.732 21:32:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.732 21:32:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.732 21:32:49 -- paths/export.sh@5 -- # export PATH 00:29:14.732 21:32:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.732 21:32:49 -- nvmf/common.sh@46 -- # : 0 00:29:14.732 21:32:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:14.732 21:32:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:14.732 21:32:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:14.732 21:32:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.732 21:32:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.732 21:32:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:14.732 21:32:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:14.732 21:32:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:14.732 21:32:49 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:14.732 21:32:49 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:14.732 21:32:49 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:14.732 21:32:49 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:14.732 21:32:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.732 21:32:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:14.732 21:32:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:14.732 21:32:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:14.732 21:32:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:29:14.732 21:32:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:14.732 21:32:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.732 21:32:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:14.732 21:32:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:14.732 21:32:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:14.732 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:29:22.858 21:32:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:22.858 21:32:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:22.858 21:32:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:22.858 21:32:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:22.858 21:32:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:22.858 21:32:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:22.858 21:32:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:22.858 21:32:57 -- nvmf/common.sh@294 -- # net_devs=() 00:29:22.858 21:32:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:22.859 21:32:57 -- nvmf/common.sh@295 -- # e810=() 00:29:22.859 21:32:57 -- nvmf/common.sh@295 -- # local -ga e810 00:29:22.859 21:32:57 -- nvmf/common.sh@296 -- # x722=() 00:29:22.859 21:32:57 -- nvmf/common.sh@296 -- # local -ga x722 00:29:22.859 21:32:57 -- nvmf/common.sh@297 -- # mlx=() 00:29:22.859 21:32:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:22.859 21:32:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.859 21:32:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:22.859 21:32:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:22.859 21:32:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:22.859 21:32:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:22.859 21:32:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:22.859 21:32:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:22.859 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:22.859 21:32:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
00:29:22.859 21:32:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:22.859 21:32:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:22.859 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:22.859 21:32:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:22.859 21:32:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:22.859 21:32:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.859 21:32:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:22.859 21:32:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.859 21:32:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:22.859 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:22.859 21:32:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.859 21:32:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.859 21:32:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:22.859 21:32:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.859 21:32:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:22.859 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:22.859 21:32:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.859 21:32:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:22.859 21:32:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:22.859 21:32:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:22.859 21:32:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:22.859 21:32:57 -- nvmf/common.sh@57 -- # uname 00:29:22.859 21:32:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:22.859 21:32:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:22.859 21:32:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:22.859 21:32:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:22.859 21:32:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:22.859 21:32:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:22.859 21:32:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:22.859 21:32:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:22.859 21:32:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:22.859 21:32:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:22.859 21:32:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:22.859 21:32:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:22.859 21:32:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:22.859 21:32:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:22.859 21:32:57 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:22.859 21:32:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:22.859 21:32:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:22.859 21:32:57 -- nvmf/common.sh@104 -- # continue 2 00:29:22.859 21:32:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:22.859 21:32:57 -- nvmf/common.sh@104 -- # continue 2 00:29:22.859 21:32:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:22.859 21:32:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:22.859 21:32:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:22.859 21:32:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:22.859 21:32:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:22.859 21:32:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:22.859 21:32:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:22.859 21:32:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:22.859 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:22.859 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:22.859 altname enp217s0f0np0 00:29:22.859 altname ens818f0np0 00:29:22.859 inet 192.168.100.8/24 scope global mlx_0_0 00:29:22.859 valid_lft forever preferred_lft forever 00:29:22.859 21:32:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:22.859 21:32:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:22.859 21:32:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:22.859 21:32:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:22.859 21:32:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:22.859 21:32:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:22.859 21:32:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:22.859 21:32:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:22.859 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:22.859 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:22.859 altname enp217s0f1np1 00:29:22.859 altname ens818f1np1 00:29:22.859 inet 192.168.100.9/24 scope global mlx_0_1 00:29:22.859 valid_lft forever preferred_lft forever 00:29:22.859 21:32:57 -- nvmf/common.sh@410 -- # return 0 00:29:22.859 21:32:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:22.859 21:32:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:22.859 21:32:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:22.859 21:32:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:22.859 21:32:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:22.859 21:32:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:22.859 21:32:57 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:22.859 21:32:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:22.859 21:32:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:22.859 21:32:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:22.859 21:32:57 -- nvmf/common.sh@104 -- # continue 2 00:29:22.859 21:32:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:22.859 21:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:22.859 21:32:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:22.859 21:32:57 -- nvmf/common.sh@104 -- # continue 2 00:29:22.859 21:32:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:22.859 21:32:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:22.859 21:32:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:22.859 21:32:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:22.859 21:32:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:22.859 21:32:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:22.859 21:32:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:22.859 21:32:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:22.859 21:32:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:22.860 21:32:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:22.860 21:32:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:22.860 21:32:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:22.860 21:32:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:22.860 192.168.100.9' 00:29:22.860 21:32:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:22.860 192.168.100.9' 00:29:22.860 21:32:57 -- nvmf/common.sh@445 -- # head -n 1 00:29:22.860 21:32:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:22.860 21:32:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:22.860 192.168.100.9' 00:29:22.860 21:32:57 -- nvmf/common.sh@446 -- # head -n 1 00:29:22.860 21:32:57 -- nvmf/common.sh@446 -- # tail -n +2 00:29:22.860 21:32:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:22.860 21:32:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:22.860 21:32:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:22.860 21:32:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:22.860 21:32:57 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:22.860 21:32:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:22.860 21:32:57 -- host/bdevperf.sh@25 -- # tgt_init 00:29:22.860 21:32:57 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:22.860 21:32:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:22.860 21:32:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:22.860 21:32:57 -- common/autotest_common.sh@10 -- # set +x 00:29:22.860 21:32:57 -- nvmf/common.sh@469 -- # nvmfpid=1844867 00:29:22.860 21:32:57 -- nvmf/common.sh@470 -- # waitforlisten 1844867 00:29:22.860 21:32:57 -- 
common/autotest_common.sh@819 -- # '[' -z 1844867 ']' 00:29:22.860 21:32:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.860 21:32:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:22.860 21:32:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.860 21:32:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:22.860 21:32:57 -- common/autotest_common.sh@10 -- # set +x 00:29:22.860 21:32:57 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:22.860 [2024-07-26 21:32:57.694759] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:22.860 [2024-07-26 21:32:57.694816] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.119 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.119 [2024-07-26 21:32:57.785234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:23.119 [2024-07-26 21:32:57.822614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:23.119 [2024-07-26 21:32:57.822732] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.119 [2024-07-26 21:32:57.822742] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.119 [2024-07-26 21:32:57.822751] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.119 [2024-07-26 21:32:57.822793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.119 [2024-07-26 21:32:57.822880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.119 [2024-07-26 21:32:57.822882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.687 21:32:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:23.687 21:32:58 -- common/autotest_common.sh@852 -- # return 0 00:29:23.687 21:32:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:23.687 21:32:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:23.687 21:32:58 -- common/autotest_common.sh@10 -- # set +x 00:29:23.687 21:32:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.687 21:32:58 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:23.687 21:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.687 21:32:58 -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 [2024-07-26 21:32:58.564511] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2175860/0x2179d50) succeed. 00:29:23.947 [2024-07-26 21:32:58.574923] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2176db0/0x21bb3e0) succeed. 
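At this point nvmf_tgt (pid 1844867) is running with core mask 0xE, the RDMA transport has been created, and both mlx5 IB devices have been registered. Outside the test harness, the same target bring-up can typically be reproduced with SPDK's scripts/rpc.py client; the sketch below simply mirrors the flags recorded in the trace above (the harness's rpc_cmd wrapper talks to the same RPC socket). The SPDK_DIR variable and the socket path are assumptions based on the paths seen in this run.

  # minimal bring-up sketch, assuming an SPDK build under $SPDK_DIR and the
  # default RPC socket /var/tmp/spdk.sock (both as used in this run)
  "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll for the RPC socket instead of the harness's waitforlisten helper
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done
  # create the RDMA transport with the same parameters as the trace above
  "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192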
00:29:23.947 21:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.947 21:32:58 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:23.947 21:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.947 21:32:58 -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 Malloc0 00:29:23.947 21:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.947 21:32:58 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.947 21:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.947 21:32:58 -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 21:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.947 21:32:58 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:23.947 21:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.947 21:32:58 -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 21:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.947 21:32:58 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:23.947 21:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.947 21:32:58 -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 [2024-07-26 21:32:58.721851] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:23.947 21:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.947 21:32:58 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:23.947 21:32:58 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:23.947 21:32:58 -- nvmf/common.sh@520 -- # config=() 00:29:23.947 21:32:58 -- nvmf/common.sh@520 -- # local subsystem config 00:29:23.947 21:32:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:23.947 21:32:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:23.947 { 00:29:23.947 "params": { 00:29:23.947 "name": "Nvme$subsystem", 00:29:23.947 "trtype": "$TEST_TRANSPORT", 00:29:23.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.947 "adrfam": "ipv4", 00:29:23.947 "trsvcid": "$NVMF_PORT", 00:29:23.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.947 "hdgst": ${hdgst:-false}, 00:29:23.947 "ddgst": ${ddgst:-false} 00:29:23.947 }, 00:29:23.947 "method": "bdev_nvme_attach_controller" 00:29:23.947 } 00:29:23.947 EOF 00:29:23.947 )") 00:29:23.947 21:32:58 -- nvmf/common.sh@542 -- # cat 00:29:23.947 21:32:58 -- nvmf/common.sh@544 -- # jq . 00:29:23.947 21:32:58 -- nvmf/common.sh@545 -- # IFS=, 00:29:23.947 21:32:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:23.947 "params": { 00:29:23.947 "name": "Nvme1", 00:29:23.947 "trtype": "rdma", 00:29:23.947 "traddr": "192.168.100.8", 00:29:23.947 "adrfam": "ipv4", 00:29:23.947 "trsvcid": "4420", 00:29:23.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:23.947 "hdgst": false, 00:29:23.947 "ddgst": false 00:29:23.947 }, 00:29:23.947 "method": "bdev_nvme_attach_controller" 00:29:23.947 }' 00:29:23.947 [2024-07-26 21:32:58.771380] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
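The trace then provisions the storage that bdevperf will exercise: a 64 MB malloc bdev with a 512-byte block size, attached as a namespace of nqn.2016-06.io.spdk:cnode1, with an RDMA listener on 192.168.100.8:4420 (the first target IP derived earlier). A rough standalone equivalent of those four rpc_cmd calls, again issued through scripts/rpc.py:

  # same RPCs as above, issued through the standalone RPC client
  "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The JSON document printed just above (a single bdev_nvme_attach_controller entry pointing at that listener) is what gen_nvmf_target_json feeds to bdevperf on /dev/fd/62; saved to a file, the same document can be passed to bdevperf with --json outside the harness.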
00:29:23.947 [2024-07-26 21:32:58.771435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845063 ] 00:29:23.947 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.207 [2024-07-26 21:32:58.858157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.207 [2024-07-26 21:32:58.894736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.207 Running I/O for 1 seconds... 00:29:25.586 00:29:25.586 Latency(us) 00:29:25.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:25.586 Verification LBA range: start 0x0 length 0x4000 00:29:25.586 Nvme1n1 : 1.00 24990.02 97.62 0.00 0.00 5098.11 1074.79 12111.05 00:29:25.586 =================================================================================================================== 00:29:25.586 Total : 24990.02 97.62 0.00 0.00 5098.11 1074.79 12111.05 00:29:25.586 21:33:00 -- host/bdevperf.sh@30 -- # bdevperfpid=1845340 00:29:25.586 21:33:00 -- host/bdevperf.sh@32 -- # sleep 3 00:29:25.586 21:33:00 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:25.586 21:33:00 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:25.586 21:33:00 -- nvmf/common.sh@520 -- # config=() 00:29:25.586 21:33:00 -- nvmf/common.sh@520 -- # local subsystem config 00:29:25.586 21:33:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:25.586 21:33:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:25.586 { 00:29:25.586 "params": { 00:29:25.586 "name": "Nvme$subsystem", 00:29:25.586 "trtype": "$TEST_TRANSPORT", 00:29:25.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.586 "adrfam": "ipv4", 00:29:25.586 "trsvcid": "$NVMF_PORT", 00:29:25.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.586 "hdgst": ${hdgst:-false}, 00:29:25.586 "ddgst": ${ddgst:-false} 00:29:25.586 }, 00:29:25.586 "method": "bdev_nvme_attach_controller" 00:29:25.586 } 00:29:25.586 EOF 00:29:25.586 )") 00:29:25.586 21:33:00 -- nvmf/common.sh@542 -- # cat 00:29:25.586 21:33:00 -- nvmf/common.sh@544 -- # jq . 00:29:25.586 21:33:00 -- nvmf/common.sh@545 -- # IFS=, 00:29:25.586 21:33:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:25.586 "params": { 00:29:25.586 "name": "Nvme1", 00:29:25.586 "trtype": "rdma", 00:29:25.586 "traddr": "192.168.100.8", 00:29:25.586 "adrfam": "ipv4", 00:29:25.586 "trsvcid": "4420", 00:29:25.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.586 "hdgst": false, 00:29:25.586 "ddgst": false 00:29:25.586 }, 00:29:25.586 "method": "bdev_nvme_attach_controller" 00:29:25.586 }' 00:29:25.586 [2024-07-26 21:33:00.311150] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:29:25.586 [2024-07-26 21:33:00.311206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845340 ] 00:29:25.586 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.586 [2024-07-26 21:33:00.397628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.586 [2024-07-26 21:33:00.431131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.845 Running I/O for 15 seconds... 00:29:29.132 21:33:03 -- host/bdevperf.sh@33 -- # kill -9 1844867 00:29:29.132 21:33:03 -- host/bdevperf.sh@35 -- # sleep 3 00:29:29.703 [2024-07-26 21:33:04.305044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182900 00:29:29.703 [2024-07-26 21:33:04.305157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182900 00:29:29.703 [2024-07-26 21:33:04.305177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 
[2024-07-26 21:33:04.305234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182900 00:29:29.703 [2024-07-26 21:33:04.305293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x182900 00:29:29.703 [2024-07-26 21:33:04.305337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x182900 00:29:29.703 [2024-07-26 21:33:04.305418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 
21:33:04.305429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c6080 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x182900 00:29:29.703 [2024-07-26 21:33:04.305496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10056 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x182900 00:29:29.703 [2024-07-26 21:33:04.305613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x182900 00:29:29.703 [2024-07-26 21:33:04.305655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x183700 00:29:29.703 [2024-07-26 21:33:04.305694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.703 [2024-07-26 21:33:04.305751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.703 [2024-07-26 21:33:04.305761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.305771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.305790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.305811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.305829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.305848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.305866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.305885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.305906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.305929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.305950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.305962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.305972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 
21:33:04.305983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.305993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.306035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.306073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.306111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.306149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10888 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.306168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.306248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.306270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.306307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.306327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.306346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.306364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.306402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.306420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182900 00:29:29.704 [2024-07-26 21:33:04.306439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x183700 00:29:29.704 [2024-07-26 21:33:04.306460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.704 [2024-07-26 21:33:04.306470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.704 [2024-07-26 21:33:04.306479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.306497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.306517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.306536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.306556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.306575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.306594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.306613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.306637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013886180 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.306673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.306695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.306716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 
len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.306737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.306757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.306777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.306798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.306817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.306837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.306858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.306877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.306898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.306918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.306940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013877a80 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.306961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.306983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.306995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.307005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.307030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.307053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.307075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.307098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.307119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 
21:33:04.307132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.307143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.307165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.307186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.705 [2024-07-26 21:33:04.307208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.307229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x183700 00:29:29.705 [2024-07-26 21:33:04.307248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.705 [2024-07-26 21:33:04.307261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x182900 00:29:29.705 [2024-07-26 21:33:04.307271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x182900 00:29:29.706 [2024-07-26 21:33:04.307291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x183700 00:29:29.706 [2024-07-26 21:33:04.307312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:11168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x183700 00:29:29.706 [2024-07-26 21:33:04.307335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.706 [2024-07-26 21:33:04.307356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182900 00:29:29.706 [2024-07-26 21:33:04.307376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.706 [2024-07-26 21:33:04.307397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.706 [2024-07-26 21:33:04.307417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x183700 00:29:29.706 [2024-07-26 21:33:04.307438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.706 [2024-07-26 21:33:04.307457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.706 [2024-07-26 21:33:04.307477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x183700 00:29:29.706 [2024-07-26 21:33:04.307500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x183700 00:29:29.706 [2024-07-26 21:33:04.307520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.706 [2024-07-26 21:33:04.307541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.706 [2024-07-26 21:33:04.307561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x182900 00:29:29.706 [2024-07-26 21:33:04.307581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.706 [2024-07-26 21:33:04.307601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.706 [2024-07-26 21:33:04.307622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x183700 00:29:29.706 [2024-07-26 21:33:04.307648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.307660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x182900 00:29:29.706 [2024-07-26 21:33:04.307670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:25905 cdw0:3db6000 sqhd:a1d2 p:1 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.310311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:29.706 [2024-07-26 21:33:04.310373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:29.706 [2024-07-26 21:33:04.310407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11280 len:8 PRP1 0x0 PRP2 0x0 00:29:29.706 [2024-07-26 21:33:04.310444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.706 [2024-07-26 21:33:04.310542] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
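The long run of nvme_qpair messages above is the initiator side of the fault injection: after the target is killed with kill -9 (host/bdevperf.sh@33), every command still queued on the RDMA qpair is manually completed with ABORTED - SQ DELETION, the qpair is disconnected and freed, and the bdev layer schedules a controller reset. When scanning a dump like this, a throwaway snippet such as the one below (file name illustrative, not part of the test) reduces it to counts:

  # summarize the abort storm from a captured copy of this log
  log=nvmf-phy-autotest.log
  grep -oF 'ABORTED - SQ DELETION' "$log" | wc -l     # total aborted completions
  grep -oF ': READ sqid:1'  "$log" | wc -l            # READ commands listed in the dump
  grep -oF ': WRITE sqid:1' "$log" | wc -l            # WRITE commands listed in the dump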
00:29:29.706 [2024-07-26 21:33:04.312960] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.706 [2024-07-26 21:33:04.329616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:29.706 [2024-07-26 21:33:04.331934] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:29.706 [2024-07-26 21:33:04.331955] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:29.706 [2024-07-26 21:33:04.331964] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:29:30.644 [2024-07-26 21:33:05.336324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:30.644 [2024-07-26 21:33:05.336355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.644 [2024-07-26 21:33:05.336487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.644 [2024-07-26 21:33:05.336500] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.644 [2024-07-26 21:33:05.336510] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:29:30.644 [2024-07-26 21:33:05.337249] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:30.644 [2024-07-26 21:33:05.338300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.644 [2024-07-26 21:33:05.349059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.644 [2024-07-26 21:33:05.351788] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:30.644 [2024-07-26 21:33:05.351846] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:30.644 [2024-07-26 21:33:05.351874] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:29:31.579 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1844867 Killed "${NVMF_APP[@]}" "$@" 00:29:31.579 21:33:06 -- host/bdevperf.sh@36 -- # tgt_init 00:29:31.579 21:33:06 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:31.579 21:33:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:31.579 21:33:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:31.579 21:33:06 -- common/autotest_common.sh@10 -- # set +x 00:29:31.579 21:33:06 -- nvmf/common.sh@469 -- # nvmfpid=1846804 00:29:31.579 21:33:06 -- nvmf/common.sh@470 -- # waitforlisten 1846804 00:29:31.579 21:33:06 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:31.579 21:33:06 -- common/autotest_common.sh@819 -- # '[' -z 1846804 ']' 00:29:31.579 21:33:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.579 21:33:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:31.579 21:33:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.579 21:33:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:31.579 21:33:06 -- common/autotest_common.sh@10 -- # set +x 00:29:31.579 [2024-07-26 21:33:06.333076] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:31.579 [2024-07-26 21:33:06.333123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.579 [2024-07-26 21:33:06.355838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:31.579 [2024-07-26 21:33:06.355862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.579 [2024-07-26 21:33:06.355992] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.579 [2024-07-26 21:33:06.356005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.579 [2024-07-26 21:33:06.356015] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:29:31.579 [2024-07-26 21:33:06.356346] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:31.579 [2024-07-26 21:33:06.357619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.579 [2024-07-26 21:33:06.368245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.579 [2024-07-26 21:33:06.370216] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:31.579 [2024-07-26 21:33:06.370237] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:31.579 [2024-07-26 21:33:06.370246] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:29:31.579 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.579 [2024-07-26 21:33:06.421145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:31.838 [2024-07-26 21:33:06.458194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:31.838 [2024-07-26 21:33:06.458300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.838 [2024-07-26 21:33:06.458311] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.838 [2024-07-26 21:33:06.458320] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.838 [2024-07-26 21:33:06.458362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.838 [2024-07-26 21:33:06.458445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.838 [2024-07-26 21:33:06.458446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.406 21:33:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:32.406 21:33:07 -- common/autotest_common.sh@852 -- # return 0 00:29:32.406 21:33:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:32.406 21:33:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:32.406 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:29:32.406 21:33:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.406 21:33:07 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:32.406 21:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:32.406 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:29:32.406 [2024-07-26 21:33:07.205118] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1137860/0x113bd50) succeed. 00:29:32.406 [2024-07-26 21:33:07.215411] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1138db0/0x117d3e0) succeed. 
00:29:32.664 21:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:32.664 21:33:07 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:32.664 21:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:32.664 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:29:32.664 Malloc0 00:29:32.664 21:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:32.664 21:33:07 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:32.664 21:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:32.664 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:29:32.664 21:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:32.664 21:33:07 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:32.664 21:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:32.665 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:29:32.665 21:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:32.665 21:33:07 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:32.665 21:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:32.665 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:29:32.665 [2024-07-26 21:33:07.359886] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:32.665 21:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:32.665 21:33:07 -- host/bdevperf.sh@38 -- # wait 1845340 00:29:32.665 [2024-07-26 21:33:07.374061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:32.665 [2024-07-26 21:33:07.374092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.665 [2024-07-26 21:33:07.374210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.665 [2024-07-26 21:33:07.374222] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.665 [2024-07-26 21:33:07.374233] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:29:32.665 [2024-07-26 21:33:07.375895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.665 [2024-07-26 21:33:07.378139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.665 [2024-07-26 21:33:07.411570] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
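
Editorial note: the trace above rebuilds the target side before bdevperf reconnects — a 64 MB malloc bdev (the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 defaults), subsystem nqn.2016-06.io.spdk:cnode1, and an RDMA listener on 192.168.100.8:4420. A minimal sketch consolidating those rpc_cmd calls is below; it assumes `rpc_cmd` resolves to SPDK's scripts/rpc.py on the default RPC socket (an assumption — only the sub-commands and arguments are taken from the trace).

```bash
# Sketch of the target-setup RPC sequence seen in the trace above.
# Assumption: scripts/rpc.py talking to the default /var/tmp/spdk.sock.
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192     # RDMA transport
$RPC bdev_malloc_create 64 512 -b Malloc0                                # 64 MB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
```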
00:29:40.780 00:29:40.780 Latency(us) 00:29:40.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.780 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.781 Verification LBA range: start 0x0 length 0x4000 00:29:40.781 Nvme1n1 : 15.00 18414.92 71.93 16291.04 0.00 3677.01 491.52 1040187.39 00:29:40.781 =================================================================================================================== 00:29:40.781 Total : 18414.92 71.93 16291.04 0.00 3677.01 491.52 1040187.39 00:29:41.040 21:33:15 -- host/bdevperf.sh@39 -- # sync 00:29:41.040 21:33:15 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:41.040 21:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.040 21:33:15 -- common/autotest_common.sh@10 -- # set +x 00:29:41.040 21:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.040 21:33:15 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:41.040 21:33:15 -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:41.040 21:33:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:41.040 21:33:15 -- nvmf/common.sh@116 -- # sync 00:29:41.040 21:33:15 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:41.040 21:33:15 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:41.040 21:33:15 -- nvmf/common.sh@119 -- # set +e 00:29:41.040 21:33:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:41.040 21:33:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:41.040 rmmod nvme_rdma 00:29:41.040 rmmod nvme_fabrics 00:29:41.040 21:33:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:41.040 21:33:15 -- nvmf/common.sh@123 -- # set -e 00:29:41.040 21:33:15 -- nvmf/common.sh@124 -- # return 0 00:29:41.040 21:33:15 -- nvmf/common.sh@477 -- # '[' -n 1846804 ']' 00:29:41.303 21:33:15 -- nvmf/common.sh@478 -- # killprocess 1846804 00:29:41.303 21:33:15 -- common/autotest_common.sh@926 -- # '[' -z 1846804 ']' 00:29:41.303 21:33:15 -- common/autotest_common.sh@930 -- # kill -0 1846804 00:29:41.303 21:33:15 -- common/autotest_common.sh@931 -- # uname 00:29:41.303 21:33:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:41.303 21:33:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1846804 00:29:41.303 21:33:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:41.303 21:33:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:41.303 21:33:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1846804' 00:29:41.303 killing process with pid 1846804 00:29:41.303 21:33:15 -- common/autotest_common.sh@945 -- # kill 1846804 00:29:41.303 21:33:15 -- common/autotest_common.sh@950 -- # wait 1846804 00:29:41.598 21:33:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:41.599 21:33:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:41.599 00:29:41.599 real 0m26.801s 00:29:41.599 user 1m4.598s 00:29:41.599 sys 0m7.452s 00:29:41.599 21:33:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:41.599 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:29:41.599 ************************************ 00:29:41.599 END TEST nvmf_bdevperf 00:29:41.599 ************************************ 00:29:41.599 21:33:16 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:29:41.599 21:33:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:41.599 
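
Editorial note: the bdevperf summary above reports 18414.92 IOPS at a 4096-byte I/O size over the 15.00 s verify run. As a quick plausibility check (not part of the test output), that rate reproduces the 71.93 MiB/s shown in the same row:

```bash
# 4 KiB per I/O at the reported IOPS should match the MiB/s column.
awk 'BEGIN { printf "%.2f MiB/s\n", 18414.92 * 4096 / (1024 * 1024) }'   # -> 71.93 MiB/s
```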
21:33:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:41.599 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:29:41.599 ************************************ 00:29:41.599 START TEST nvmf_target_disconnect 00:29:41.599 ************************************ 00:29:41.599 21:33:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:29:41.599 * Looking for test storage... 00:29:41.599 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:41.599 21:33:16 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.599 21:33:16 -- nvmf/common.sh@7 -- # uname -s 00:29:41.599 21:33:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.599 21:33:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.599 21:33:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.599 21:33:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.599 21:33:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.599 21:33:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.599 21:33:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.599 21:33:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.599 21:33:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.599 21:33:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.599 21:33:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:41.599 21:33:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:41.599 21:33:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.599 21:33:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.599 21:33:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.599 21:33:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:41.599 21:33:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.599 21:33:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.599 21:33:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.599 21:33:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.599 21:33:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.599 21:33:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.599 21:33:16 -- paths/export.sh@5 -- # export PATH 00:29:41.599 21:33:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.599 21:33:16 -- nvmf/common.sh@46 -- # : 0 00:29:41.599 21:33:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:41.599 21:33:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:41.599 21:33:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:41.599 21:33:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.599 21:33:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.599 21:33:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:41.599 21:33:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:41.599 21:33:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:41.599 21:33:16 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:29:41.599 21:33:16 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:41.599 21:33:16 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:41.599 21:33:16 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:41.599 21:33:16 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:41.599 21:33:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.599 21:33:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:41.599 21:33:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:41.599 21:33:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:41.599 21:33:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.599 21:33:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.599 21:33:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.599 21:33:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:41.599 21:33:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:41.599 21:33:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:41.599 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:29:49.715 21:33:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:49.715 21:33:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:49.715 21:33:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:49.715 21:33:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:49.715 21:33:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:49.715 21:33:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:49.715 21:33:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:49.715 
21:33:24 -- nvmf/common.sh@294 -- # net_devs=() 00:29:49.715 21:33:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:49.715 21:33:24 -- nvmf/common.sh@295 -- # e810=() 00:29:49.715 21:33:24 -- nvmf/common.sh@295 -- # local -ga e810 00:29:49.715 21:33:24 -- nvmf/common.sh@296 -- # x722=() 00:29:49.715 21:33:24 -- nvmf/common.sh@296 -- # local -ga x722 00:29:49.715 21:33:24 -- nvmf/common.sh@297 -- # mlx=() 00:29:49.715 21:33:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:49.715 21:33:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.715 21:33:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:49.715 21:33:24 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:49.715 21:33:24 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:49.715 21:33:24 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:49.715 21:33:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:49.715 21:33:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:49.715 21:33:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:49.715 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:49.715 21:33:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:49.715 21:33:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:49.715 21:33:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:49.715 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:49.715 21:33:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:49.715 21:33:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:49.715 21:33:24 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:29:49.715 21:33:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.715 21:33:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:49.715 21:33:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.715 21:33:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:49.715 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:49.715 21:33:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.715 21:33:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:49.715 21:33:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.715 21:33:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:49.715 21:33:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.715 21:33:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:49.715 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:49.715 21:33:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.715 21:33:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:49.715 21:33:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:49.715 21:33:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:49.715 21:33:24 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:49.715 21:33:24 -- nvmf/common.sh@57 -- # uname 00:29:49.715 21:33:24 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:49.715 21:33:24 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:49.715 21:33:24 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:49.715 21:33:24 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:49.715 21:33:24 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:49.715 21:33:24 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:49.715 21:33:24 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:49.715 21:33:24 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:49.715 21:33:24 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:49.715 21:33:24 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:49.715 21:33:24 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:49.715 21:33:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:49.715 21:33:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:49.715 21:33:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:49.715 21:33:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:49.715 21:33:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:49.715 21:33:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:49.715 21:33:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.715 21:33:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:49.715 21:33:24 -- nvmf/common.sh@104 -- # continue 2 00:29:49.715 21:33:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:49.715 21:33:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.715 21:33:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.715 21:33:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:49.715 21:33:24 -- 
nvmf/common.sh@103 -- # echo mlx_0_1 00:29:49.715 21:33:24 -- nvmf/common.sh@104 -- # continue 2 00:29:49.715 21:33:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:49.715 21:33:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:49.715 21:33:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:49.715 21:33:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:49.715 21:33:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:49.715 21:33:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:49.715 21:33:24 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:49.715 21:33:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:49.715 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:49.715 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:49.715 altname enp217s0f0np0 00:29:49.715 altname ens818f0np0 00:29:49.715 inet 192.168.100.8/24 scope global mlx_0_0 00:29:49.715 valid_lft forever preferred_lft forever 00:29:49.715 21:33:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:49.715 21:33:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:49.715 21:33:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:49.715 21:33:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:49.715 21:33:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:49.715 21:33:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:49.715 21:33:24 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:49.715 21:33:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:49.715 21:33:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:49.715 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:49.715 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:49.715 altname enp217s0f1np1 00:29:49.715 altname ens818f1np1 00:29:49.716 inet 192.168.100.9/24 scope global mlx_0_1 00:29:49.716 valid_lft forever preferred_lft forever 00:29:49.716 21:33:24 -- nvmf/common.sh@410 -- # return 0 00:29:49.716 21:33:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:49.716 21:33:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:49.716 21:33:24 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:49.716 21:33:24 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:49.716 21:33:24 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:49.716 21:33:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:49.716 21:33:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:49.716 21:33:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:49.716 21:33:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:49.716 21:33:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:49.716 21:33:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:49.716 21:33:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.716 21:33:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:49.716 21:33:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:49.716 21:33:24 -- nvmf/common.sh@104 -- # continue 2 00:29:49.716 21:33:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:49.716 21:33:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.716 21:33:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:49.716 21:33:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.716 21:33:24 
-- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:49.716 21:33:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:49.716 21:33:24 -- nvmf/common.sh@104 -- # continue 2 00:29:49.716 21:33:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:49.716 21:33:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:49.716 21:33:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:49.716 21:33:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:49.716 21:33:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:49.716 21:33:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:49.716 21:33:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:49.716 21:33:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:49.716 21:33:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:49.716 21:33:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:49.716 21:33:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:49.716 21:33:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:49.974 21:33:24 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:49.974 192.168.100.9' 00:29:49.974 21:33:24 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:49.974 192.168.100.9' 00:29:49.974 21:33:24 -- nvmf/common.sh@445 -- # head -n 1 00:29:49.974 21:33:24 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:49.974 21:33:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:49.974 192.168.100.9' 00:29:49.974 21:33:24 -- nvmf/common.sh@446 -- # tail -n +2 00:29:49.974 21:33:24 -- nvmf/common.sh@446 -- # head -n 1 00:29:49.974 21:33:24 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:49.974 21:33:24 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:49.974 21:33:24 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:49.974 21:33:24 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:49.974 21:33:24 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:49.974 21:33:24 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:49.974 21:33:24 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:49.974 21:33:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:49.974 21:33:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:49.974 21:33:24 -- common/autotest_common.sh@10 -- # set +x 00:29:49.974 ************************************ 00:29:49.974 START TEST nvmf_target_disconnect_tc1 00:29:49.974 ************************************ 00:29:49.974 21:33:24 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:29:49.974 21:33:24 -- host/target_disconnect.sh@32 -- # set +e 00:29:49.974 21:33:24 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:49.974 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.974 [2024-07-26 21:33:24.778691] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:49.974 [2024-07-26 21:33:24.778736] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:49.974 [2024-07-26 21:33:24.778753] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:29:51.346 [2024-07-26 21:33:25.782688] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:51.346 [2024-07-26 21:33:25.782751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:29:51.346 [2024-07-26 21:33:25.782762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:29:51.346 [2024-07-26 21:33:25.782785] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:51.346 [2024-07-26 21:33:25.782795] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:51.346 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:29:51.346 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:51.346 Initializing NVMe Controllers 00:29:51.346 21:33:25 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:51.346 21:33:25 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:51.346 21:33:25 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:29:51.346 21:33:25 -- common/autotest_common.sh@1132 -- # return 0 00:29:51.346 21:33:25 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:51.346 21:33:25 -- host/target_disconnect.sh@41 -- # set -e 00:29:51.347 00:29:51.347 real 0m1.145s 00:29:51.347 user 0m0.853s 00:29:51.347 sys 0m0.281s 00:29:51.347 21:33:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.347 21:33:25 -- common/autotest_common.sh@10 -- # set +x 00:29:51.347 ************************************ 00:29:51.347 END TEST nvmf_target_disconnect_tc1 00:29:51.347 ************************************ 00:29:51.347 21:33:25 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:51.347 21:33:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:51.347 21:33:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:51.347 21:33:25 -- common/autotest_common.sh@10 -- # set +x 00:29:51.347 ************************************ 00:29:51.347 START TEST nvmf_target_disconnect_tc2 00:29:51.347 ************************************ 00:29:51.347 21:33:25 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:29:51.347 21:33:25 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:29:51.347 21:33:25 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:51.347 21:33:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:51.347 21:33:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:51.347 21:33:25 -- common/autotest_common.sh@10 -- # set +x 00:29:51.347 21:33:25 -- nvmf/common.sh@469 -- # nvmfpid=1852767 00:29:51.347 21:33:25 -- nvmf/common.sh@470 -- # waitforlisten 1852767 00:29:51.347 21:33:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:51.347 21:33:25 -- common/autotest_common.sh@819 -- # '[' -z 1852767 ']' 00:29:51.347 21:33:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.347 21:33:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:51.347 21:33:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
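
Editorial note: in the nvmf/common.sh trace just above, the harness derives the two RDMA test addresses by reading the IPv4 address off each mlx_0_* netdev and splitting the resulting list into first and second target IPs. A sketch of that derivation follows; the function name and pipeline steps are taken from the trace, and the interface names are the ones this rig reports.

```bash
# Read the IPv4 address assigned to an RDMA netdev (as in get_ip_address above).
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
```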
00:29:51.347 21:33:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:51.347 21:33:25 -- common/autotest_common.sh@10 -- # set +x 00:29:51.347 [2024-07-26 21:33:25.900497] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:51.347 [2024-07-26 21:33:25.900553] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.347 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.347 [2024-07-26 21:33:26.002017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.347 [2024-07-26 21:33:26.039448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:51.347 [2024-07-26 21:33:26.039559] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.347 [2024-07-26 21:33:26.039569] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.347 [2024-07-26 21:33:26.039579] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.347 [2024-07-26 21:33:26.039701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:51.347 [2024-07-26 21:33:26.039813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:51.347 [2024-07-26 21:33:26.039922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:51.347 [2024-07-26 21:33:26.039924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:51.914 21:33:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:51.914 21:33:26 -- common/autotest_common.sh@852 -- # return 0 00:29:51.914 21:33:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:51.914 21:33:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:51.914 21:33:26 -- common/autotest_common.sh@10 -- # set +x 00:29:51.914 21:33:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.914 21:33:26 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:51.914 21:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.914 21:33:26 -- common/autotest_common.sh@10 -- # set +x 00:29:51.914 Malloc0 00:29:51.914 21:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.914 21:33:26 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:51.914 21:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.914 21:33:26 -- common/autotest_common.sh@10 -- # set +x 00:29:51.914 [2024-07-26 21:33:26.779834] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ff2820/0x1ffe440) succeed. 00:29:52.173 [2024-07-26 21:33:26.790570] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ff3e10/0x207e480) succeed. 
00:29:52.173 21:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.173 21:33:26 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.173 21:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.173 21:33:26 -- common/autotest_common.sh@10 -- # set +x 00:29:52.173 21:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.173 21:33:26 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:52.173 21:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.173 21:33:26 -- common/autotest_common.sh@10 -- # set +x 00:29:52.173 21:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.173 21:33:26 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:52.173 21:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.173 21:33:26 -- common/autotest_common.sh@10 -- # set +x 00:29:52.173 [2024-07-26 21:33:26.930076] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:52.173 21:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.173 21:33:26 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:52.173 21:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.173 21:33:26 -- common/autotest_common.sh@10 -- # set +x 00:29:52.173 21:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.173 21:33:26 -- host/target_disconnect.sh@50 -- # reconnectpid=1852935 00:29:52.173 21:33:26 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:52.173 21:33:26 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:52.173 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.706 21:33:28 -- host/target_disconnect.sh@53 -- # kill -9 1852767 00:29:54.706 21:33:28 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:55.274 Write completed with error (sct=0, sc=8) 00:29:55.274 starting I/O failed 00:29:55.274 Read completed with error (sct=0, sc=8) 00:29:55.274 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error 
(sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Read completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 Write completed with error (sct=0, sc=8) 00:29:55.275 starting I/O failed 00:29:55.275 [2024-07-26 21:33:30.134353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:56.213 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1852767 Killed "${NVMF_APP[@]}" "$@" 00:29:56.213 21:33:30 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:29:56.213 21:33:30 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:56.213 21:33:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:56.213 21:33:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:56.213 21:33:30 -- common/autotest_common.sh@10 -- # set +x 00:29:56.213 21:33:30 -- nvmf/common.sh@469 -- # nvmfpid=1853717 00:29:56.213 21:33:30 -- nvmf/common.sh@470 -- # waitforlisten 1853717 00:29:56.213 21:33:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:56.213 21:33:30 -- common/autotest_common.sh@819 -- # '[' -z 1853717 ']' 00:29:56.213 21:33:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.213 21:33:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:56.213 21:33:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.213 21:33:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:56.213 21:33:30 -- common/autotest_common.sh@10 -- # set +x 00:29:56.213 [2024-07-26 21:33:31.006063] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:29:56.213 [2024-07-26 21:33:31.006116] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.213 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.472 [2024-07-26 21:33:31.109910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Read completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 Write completed with error (sct=0, sc=8) 00:29:56.472 starting I/O failed 00:29:56.472 [2024-07-26 21:33:31.139672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.472 [2024-07-26 21:33:31.148478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:56.472 [2024-07-26 21:33:31.148581] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.472 [2024-07-26 21:33:31.148591] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.472 [2024-07-26 21:33:31.148602] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.472 [2024-07-26 21:33:31.148726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:56.472 [2024-07-26 21:33:31.148835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:56.472 [2024-07-26 21:33:31.148919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:56.472 [2024-07-26 21:33:31.148921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:57.050 21:33:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:57.050 21:33:31 -- common/autotest_common.sh@852 -- # return 0 00:29:57.050 21:33:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:57.050 21:33:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:57.050 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:29:57.050 21:33:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.050 21:33:31 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:57.050 21:33:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.050 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:29:57.050 Malloc0 00:29:57.050 21:33:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.050 21:33:31 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:57.050 21:33:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.050 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:29:57.050 [2024-07-26 21:33:31.889066] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x132c820/0x1338440) succeed. 00:29:57.050 [2024-07-26 21:33:31.899802] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x132de10/0x13b8480) succeed. 
00:29:57.309 21:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.309 21:33:32 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:57.309 21:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.309 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:29:57.309 21:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.309 21:33:32 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:57.309 21:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.309 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:29:57.309 21:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.309 21:33:32 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:57.309 21:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.309 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:29:57.309 [2024-07-26 21:33:32.043343] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:57.309 21:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.309 21:33:32 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:57.309 21:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.309 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:29:57.309 21:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.309 21:33:32 -- host/target_disconnect.sh@58 -- # wait 1852935 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with 
error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Write completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 Read completed with error (sct=0, sc=8) 00:29:57.309 starting I/O failed 00:29:57.309 [2024-07-26 21:33:32.144703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.309 [2024-07-26 21:33:32.157651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.309 [2024-07-26 21:33:32.157705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.309 [2024-07-26 21:33:32.157730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.309 [2024-07-26 21:33:32.157740] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.309 [2024-07-26 21:33:32.157758] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.309 [2024-07-26 21:33:32.167958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.309 qpair failed and we were unable to recover it. 00:29:57.309 [2024-07-26 21:33:32.177516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.309 [2024-07-26 21:33:32.177565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.309 [2024-07-26 21:33:32.177586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.309 [2024-07-26 21:33:32.177596] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.309 [2024-07-26 21:33:32.177606] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.568 [2024-07-26 21:33:32.187703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.568 qpair failed and we were unable to recover it. 
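The subsystem setup recorded above (target_disconnect.sh lines 22 through 26) maps onto four more RPCs; a sketch under the same assumption that rpc_cmd wraps scripts/rpc.py, using the 192.168.100.8 RDMA address and port 4420 shown in the listener notice:

# subsystem cnode1, allow any host (-a), serial SPDK00000000000001
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# expose the malloc bdev as a namespace of that subsystem
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listen for I/O and for discovery on 192.168.100.8:4420 over RDMA
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

Once both listeners are up, the harness waits on PID 1852935 (presumably the background initiator it started earlier), and the Read/Write completion errors above are the disconnect scenario the test is exercising.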
00:29:57.568 [2024-07-26 21:33:32.197529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.568 [2024-07-26 21:33:32.197574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.568 [2024-07-26 21:33:32.197591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.197601] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.197613] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.207977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-07-26 21:33:32.217558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.217602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.217623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.217641] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.217650] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.227928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-07-26 21:33:32.237725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.237773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.237791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.237801] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.237811] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.247897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 
00:29:57.569 [2024-07-26 21:33:32.257733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.257775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.257793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.257803] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.257812] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.268222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-07-26 21:33:32.277832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.277872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.277889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.277898] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.277907] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.288193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-07-26 21:33:32.297767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.297809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.297826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.297835] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.297844] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.308286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 
00:29:57.569 [2024-07-26 21:33:32.317944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.317992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.318009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.318018] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.318027] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.328186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-07-26 21:33:32.337947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.337990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.338009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.338019] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.338027] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.348225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-07-26 21:33:32.358087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.358133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.358150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.358159] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.358168] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.368438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 
00:29:57.569 [2024-07-26 21:33:32.378067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.378109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.378130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.378140] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.378149] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.388372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-07-26 21:33:32.398032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.398078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.398096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.398106] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.398115] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.408307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-07-26 21:33:32.418028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.569 [2024-07-26 21:33:32.418068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.569 [2024-07-26 21:33:32.418086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.569 [2024-07-26 21:33:32.418096] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.569 [2024-07-26 21:33:32.418105] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.569 [2024-07-26 21:33:32.428404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.569 qpair failed and we were unable to recover it. 
00:29:57.828 [2024-07-26 21:33:32.438179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.438215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.438233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.438242] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.438251] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.448420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 00:29:57.828 [2024-07-26 21:33:32.458280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.458321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.458339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.458348] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.458357] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.468720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 00:29:57.828 [2024-07-26 21:33:32.478304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.478342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.478359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.478368] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.478377] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.488773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 
00:29:57.828 [2024-07-26 21:33:32.498371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.498410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.498428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.498437] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.498446] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.508622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 00:29:57.828 [2024-07-26 21:33:32.518435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.518471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.518487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.518497] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.518506] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.528844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 00:29:57.828 [2024-07-26 21:33:32.538469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.538513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.538531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.538541] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.538550] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.548829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 
00:29:57.828 [2024-07-26 21:33:32.558521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.558559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.558579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.558589] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.558597] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.568830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 00:29:57.828 [2024-07-26 21:33:32.578734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.578778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.578795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.578805] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.578814] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.588688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 00:29:57.828 [2024-07-26 21:33:32.598837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.598876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.598893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.598903] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.598912] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.609070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 
00:29:57.828 [2024-07-26 21:33:32.618680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.618718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.618735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.618744] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.618753] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.628957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 00:29:57.828 [2024-07-26 21:33:32.638818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.638857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.638875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.638884] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.638896] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.649273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 00:29:57.828 [2024-07-26 21:33:32.658809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.658854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.658872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.658881] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.658890] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.669182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 
00:29:57.828 [2024-07-26 21:33:32.678997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.828 [2024-07-26 21:33:32.679037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.828 [2024-07-26 21:33:32.679053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.828 [2024-07-26 21:33:32.679063] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.828 [2024-07-26 21:33:32.679072] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:57.828 [2024-07-26 21:33:32.689491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.828 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-26 21:33:32.698999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.087 [2024-07-26 21:33:32.699041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.087 [2024-07-26 21:33:32.699057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.087 [2024-07-26 21:33:32.699067] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.087 [2024-07-26 21:33:32.699075] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.087 [2024-07-26 21:33:32.709416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-26 21:33:32.719162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.087 [2024-07-26 21:33:32.719209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.087 [2024-07-26 21:33:32.719225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.087 [2024-07-26 21:33:32.719234] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.087 [2024-07-26 21:33:32.719243] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.087 [2024-07-26 21:33:32.729411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.087 qpair failed and we were unable to recover it. 
00:29:58.087 [2024-07-26 21:33:32.739095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.087 [2024-07-26 21:33:32.739137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.087 [2024-07-26 21:33:32.739153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.087 [2024-07-26 21:33:32.739164] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.087 [2024-07-26 21:33:32.739172] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.087 [2024-07-26 21:33:32.749653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-26 21:33:32.759231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.087 [2024-07-26 21:33:32.759276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.087 [2024-07-26 21:33:32.759292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.087 [2024-07-26 21:33:32.759302] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.087 [2024-07-26 21:33:32.759311] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.087 [2024-07-26 21:33:32.769481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-26 21:33:32.779265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.087 [2024-07-26 21:33:32.779306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.087 [2024-07-26 21:33:32.779322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.088 [2024-07-26 21:33:32.779332] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.088 [2024-07-26 21:33:32.779341] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.088 [2024-07-26 21:33:32.789662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.088 qpair failed and we were unable to recover it. 
00:29:58.088 [2024-07-26 21:33:32.799323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.088 [2024-07-26 21:33:32.799365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.088 [2024-07-26 21:33:32.799382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.088 [2024-07-26 21:33:32.799391] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.088 [2024-07-26 21:33:32.799401] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.088 [2024-07-26 21:33:32.809836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-26 21:33:32.819331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.088 [2024-07-26 21:33:32.819368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.088 [2024-07-26 21:33:32.819384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.088 [2024-07-26 21:33:32.819397] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.088 [2024-07-26 21:33:32.819406] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.088 [2024-07-26 21:33:32.829752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-26 21:33:32.839405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.088 [2024-07-26 21:33:32.839448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.088 [2024-07-26 21:33:32.839465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.088 [2024-07-26 21:33:32.839475] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.088 [2024-07-26 21:33:32.839483] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.088 [2024-07-26 21:33:32.849841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.088 qpair failed and we were unable to recover it. 
00:29:58.088 [2024-07-26 21:33:32.859589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.088 [2024-07-26 21:33:32.859632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.088 [2024-07-26 21:33:32.859649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.088 [2024-07-26 21:33:32.859658] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.088 [2024-07-26 21:33:32.859667] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.088 [2024-07-26 21:33:32.869966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-26 21:33:32.879623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.088 [2024-07-26 21:33:32.879669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.088 [2024-07-26 21:33:32.879686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.088 [2024-07-26 21:33:32.879695] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.088 [2024-07-26 21:33:32.879704] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.088 [2024-07-26 21:33:32.890061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-26 21:33:32.899632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.088 [2024-07-26 21:33:32.899677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.088 [2024-07-26 21:33:32.899694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.088 [2024-07-26 21:33:32.899703] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.088 [2024-07-26 21:33:32.899712] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.088 [2024-07-26 21:33:32.909903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.088 qpair failed and we were unable to recover it. 
00:29:58.088 [2024-07-26 21:33:32.919614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.088 [2024-07-26 21:33:32.919657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.088 [2024-07-26 21:33:32.919674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.088 [2024-07-26 21:33:32.919684] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.088 [2024-07-26 21:33:32.919692] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.088 [2024-07-26 21:33:32.929978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-26 21:33:32.939508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.088 [2024-07-26 21:33:32.939549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.088 [2024-07-26 21:33:32.939565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.088 [2024-07-26 21:33:32.939575] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.088 [2024-07-26 21:33:32.939583] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.088 [2024-07-26 21:33:32.949963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.347 [2024-07-26 21:33:32.959741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.347 [2024-07-26 21:33:32.959783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.347 [2024-07-26 21:33:32.959799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.347 [2024-07-26 21:33:32.959809] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.347 [2024-07-26 21:33:32.959817] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.347 [2024-07-26 21:33:32.970118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.347 qpair failed and we were unable to recover it. 
00:29:58.347 [2024-07-26 21:33:32.979793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.347 [2024-07-26 21:33:32.979829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.347 [2024-07-26 21:33:32.979845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.347 [2024-07-26 21:33:32.979855] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.347 [2024-07-26 21:33:32.979863] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:32.990010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 00:29:58.348 [2024-07-26 21:33:32.999797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:32.999832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:32.999852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:32.999861] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:32.999870] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.010279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 00:29:58.348 [2024-07-26 21:33:33.019908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.019947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.019964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.019973] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.019982] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.030527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 
00:29:58.348 [2024-07-26 21:33:33.040028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.040074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.040090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.040100] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.040109] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.050599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 00:29:58.348 [2024-07-26 21:33:33.060163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.060204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.060221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.060230] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.060239] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.070663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 00:29:58.348 [2024-07-26 21:33:33.080106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.080147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.080163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.080173] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.080184] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.090674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 
00:29:58.348 [2024-07-26 21:33:33.100242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.100282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.100298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.100307] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.100315] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.110673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 00:29:58.348 [2024-07-26 21:33:33.120298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.120340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.120356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.120365] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.120374] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.130834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 00:29:58.348 [2024-07-26 21:33:33.140298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.140340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.140357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.140366] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.140375] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.150850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 
00:29:58.348 [2024-07-26 21:33:33.160361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.160399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.160417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.160427] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.160435] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.170983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 00:29:58.348 [2024-07-26 21:33:33.180459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.180499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.180516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.180526] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.180535] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.190882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 00:29:58.348 [2024-07-26 21:33:33.200400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.348 [2024-07-26 21:33:33.200442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.348 [2024-07-26 21:33:33.200458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.348 [2024-07-26 21:33:33.200467] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.348 [2024-07-26 21:33:33.200476] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.348 [2024-07-26 21:33:33.210983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.348 qpair failed and we were unable to recover it. 
00:29:58.608 [2024-07-26 21:33:33.220579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.220620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.220646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.220656] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.220665] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.231001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 00:29:58.608 [2024-07-26 21:33:33.240585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.240623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.240644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.240654] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.240664] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.251243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 00:29:58.608 [2024-07-26 21:33:33.260651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.260692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.260708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.260721] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.260729] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.271208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 
00:29:58.608 [2024-07-26 21:33:33.280827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.280869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.280886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.280896] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.280905] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.291235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 00:29:58.608 [2024-07-26 21:33:33.300797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.300842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.300859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.300869] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.300878] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.311351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 00:29:58.608 [2024-07-26 21:33:33.320954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.320997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.321014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.321023] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.321032] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.331135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 
00:29:58.608 [2024-07-26 21:33:33.341066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.341107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.341135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.341145] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.341154] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.351523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 00:29:58.608 [2024-07-26 21:33:33.361022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.361065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.361082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.361092] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.361102] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.371609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 00:29:58.608 [2024-07-26 21:33:33.381214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.381253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.381271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.381281] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.381289] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.391752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 
00:29:58.608 [2024-07-26 21:33:33.401170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.401213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.401230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.401240] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.401249] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.411487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 00:29:58.608 [2024-07-26 21:33:33.421077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.421117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.421134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.421143] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.421152] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.431580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 00:29:58.608 [2024-07-26 21:33:33.441269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.441318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.441338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.441348] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.441357] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.451705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 
00:29:58.608 [2024-07-26 21:33:33.461364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.608 [2024-07-26 21:33:33.461409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.608 [2024-07-26 21:33:33.461425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.608 [2024-07-26 21:33:33.461435] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.608 [2024-07-26 21:33:33.461444] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.608 [2024-07-26 21:33:33.471885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.608 qpair failed and we were unable to recover it. 00:29:58.866 [2024-07-26 21:33:33.481431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.481470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.481489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.481499] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.481508] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.491869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-07-26 21:33:33.501477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.501517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.501533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.501543] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.501552] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.511912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 
00:29:58.866 [2024-07-26 21:33:33.521716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.521763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.521779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.521789] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.521801] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.532018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-07-26 21:33:33.541755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.541801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.541819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.541828] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.541837] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.552071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-07-26 21:33:33.561775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.561818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.561834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.561843] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.561852] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.572137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 
00:29:58.866 [2024-07-26 21:33:33.581799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.581840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.581857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.581866] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.581875] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.592301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-07-26 21:33:33.601914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.601954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.601971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.601980] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.601989] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.612323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-07-26 21:33:33.621829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.621869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.621886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.621895] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.621903] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.632376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 
00:29:58.866 [2024-07-26 21:33:33.641879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.641917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.641934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.641943] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.641952] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.652579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-07-26 21:33:33.661972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.662014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.662032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.662042] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.662051] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.672462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-07-26 21:33:33.682050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.682100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.682117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.682126] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.682135] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.692687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 
00:29:58.866 [2024-07-26 21:33:33.702112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.702152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.702169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.702182] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.702191] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.712606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-07-26 21:33:33.722317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-07-26 21:33:33.722355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-07-26 21:33:33.722371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-07-26 21:33:33.722381] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-07-26 21:33:33.722390] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:58.866 [2024-07-26 21:33:33.732591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.866 qpair failed and we were unable to recover it. 00:29:59.125 [2024-07-26 21:33:33.742338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.125 [2024-07-26 21:33:33.742381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.125 [2024-07-26 21:33:33.742400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.125 [2024-07-26 21:33:33.742410] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.125 [2024-07-26 21:33:33.742418] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.125 [2024-07-26 21:33:33.752780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.125 qpair failed and we were unable to recover it. 
00:29:59.125 [2024-07-26 21:33:33.762282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.125 [2024-07-26 21:33:33.762323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.125 [2024-07-26 21:33:33.762340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.125 [2024-07-26 21:33:33.762349] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.125 [2024-07-26 21:33:33.762358] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.125 [2024-07-26 21:33:33.772632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.125 qpair failed and we were unable to recover it. 00:29:59.125 [2024-07-26 21:33:33.782343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.125 [2024-07-26 21:33:33.782384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.125 [2024-07-26 21:33:33.782402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.125 [2024-07-26 21:33:33.782411] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.125 [2024-07-26 21:33:33.782420] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.125 [2024-07-26 21:33:33.792867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.125 qpair failed and we were unable to recover it. 00:29:59.125 [2024-07-26 21:33:33.802489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.125 [2024-07-26 21:33:33.802531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.125 [2024-07-26 21:33:33.802547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.125 [2024-07-26 21:33:33.802556] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.125 [2024-07-26 21:33:33.802565] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.125 [2024-07-26 21:33:33.812908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.125 qpair failed and we were unable to recover it. 
00:29:59.125 [2024-07-26 21:33:33.822452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.125 [2024-07-26 21:33:33.822491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.125 [2024-07-26 21:33:33.822507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.125 [2024-07-26 21:33:33.822517] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.125 [2024-07-26 21:33:33.822525] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.125 [2024-07-26 21:33:33.832855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.125 qpair failed and we were unable to recover it. 00:29:59.125 [2024-07-26 21:33:33.842576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.125 [2024-07-26 21:33:33.842614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.125 [2024-07-26 21:33:33.842636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.125 [2024-07-26 21:33:33.842646] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.125 [2024-07-26 21:33:33.842654] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.125 [2024-07-26 21:33:33.852983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.125 qpair failed and we were unable to recover it. 00:29:59.125 [2024-07-26 21:33:33.862527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.125 [2024-07-26 21:33:33.862571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.125 [2024-07-26 21:33:33.862587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.125 [2024-07-26 21:33:33.862597] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.125 [2024-07-26 21:33:33.862605] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.125 [2024-07-26 21:33:33.873043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.125 qpair failed and we were unable to recover it. 
00:29:59.125 [2024-07-26 21:33:33.882533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.125 [2024-07-26 21:33:33.882569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.125 [2024-07-26 21:33:33.882589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.125 [2024-07-26 21:33:33.882598] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.125 [2024-07-26 21:33:33.882607] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.125 [2024-07-26 21:33:33.893183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.125 qpair failed and we were unable to recover it. 00:29:59.125 [2024-07-26 21:33:33.902707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.125 [2024-07-26 21:33:33.902751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.126 [2024-07-26 21:33:33.902767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.126 [2024-07-26 21:33:33.902777] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.126 [2024-07-26 21:33:33.902786] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.126 [2024-07-26 21:33:33.913262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.126 qpair failed and we were unable to recover it. 00:29:59.126 [2024-07-26 21:33:33.922622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.126 [2024-07-26 21:33:33.922662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.126 [2024-07-26 21:33:33.922679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.126 [2024-07-26 21:33:33.922688] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.126 [2024-07-26 21:33:33.922697] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.126 [2024-07-26 21:33:33.933206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.126 qpair failed and we were unable to recover it. 
00:29:59.126 [2024-07-26 21:33:33.942804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.126 [2024-07-26 21:33:33.942840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.126 [2024-07-26 21:33:33.942857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.126 [2024-07-26 21:33:33.942867] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.126 [2024-07-26 21:33:33.942875] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.126 [2024-07-26 21:33:33.953090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.126 qpair failed and we were unable to recover it. 00:29:59.126 [2024-07-26 21:33:33.962765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.126 [2024-07-26 21:33:33.962803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.126 [2024-07-26 21:33:33.962820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.126 [2024-07-26 21:33:33.962829] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.126 [2024-07-26 21:33:33.962838] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.126 [2024-07-26 21:33:33.973273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.126 qpair failed and we were unable to recover it. 00:29:59.126 [2024-07-26 21:33:33.982890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.126 [2024-07-26 21:33:33.982931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.126 [2024-07-26 21:33:33.982947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.126 [2024-07-26 21:33:33.982957] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.126 [2024-07-26 21:33:33.982965] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.126 [2024-07-26 21:33:33.993428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.126 qpair failed and we were unable to recover it. 
00:29:59.385 [2024-07-26 21:33:34.003019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.003068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.003086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.003096] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.003105] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.013472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 00:29:59.385 [2024-07-26 21:33:34.022971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.023011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.023028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.023037] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.023046] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.033504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 00:29:59.385 [2024-07-26 21:33:34.043033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.043069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.043086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.043095] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.043104] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.053331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 
00:29:59.385 [2024-07-26 21:33:34.063137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.063183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.063200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.063210] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.063218] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.073633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 00:29:59.385 [2024-07-26 21:33:34.083122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.083167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.083184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.083193] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.083202] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.093513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 00:29:59.385 [2024-07-26 21:33:34.103248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.103284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.103301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.103310] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.103319] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.113578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 
00:29:59.385 [2024-07-26 21:33:34.123303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.123336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.123352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.123362] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.123370] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.133551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 00:29:59.385 [2024-07-26 21:33:34.143312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.143353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.143370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.143379] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.143391] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.153575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 00:29:59.385 [2024-07-26 21:33:34.163380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.163424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.163441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.163451] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.163460] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.173654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 
00:29:59.385 [2024-07-26 21:33:34.183328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.183368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.183384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.385 [2024-07-26 21:33:34.183394] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.385 [2024-07-26 21:33:34.183403] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.385 [2024-07-26 21:33:34.193823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.385 qpair failed and we were unable to recover it. 00:29:59.385 [2024-07-26 21:33:34.203454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.385 [2024-07-26 21:33:34.203496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.385 [2024-07-26 21:33:34.203512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.386 [2024-07-26 21:33:34.203522] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.386 [2024-07-26 21:33:34.203531] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.386 [2024-07-26 21:33:34.213755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.386 qpair failed and we were unable to recover it. 00:29:59.386 [2024-07-26 21:33:34.223483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.386 [2024-07-26 21:33:34.223524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.386 [2024-07-26 21:33:34.223540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.386 [2024-07-26 21:33:34.223550] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.386 [2024-07-26 21:33:34.223558] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.386 [2024-07-26 21:33:34.233799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.386 qpair failed and we were unable to recover it. 
00:29:59.386 [2024-07-26 21:33:34.243614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.386 [2024-07-26 21:33:34.243664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.386 [2024-07-26 21:33:34.243681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.386 [2024-07-26 21:33:34.243691] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.386 [2024-07-26 21:33:34.243700] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.254142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 00:29:59.645 [2024-07-26 21:33:34.263700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.263734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.263752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.263762] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.263770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.274018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 00:29:59.645 [2024-07-26 21:33:34.283756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.283798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.283815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.283825] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.283833] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.294107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 
00:29:59.645 [2024-07-26 21:33:34.303652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.303691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.303709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.303718] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.303727] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.314116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 00:29:59.645 [2024-07-26 21:33:34.323822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.323865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.323884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.323894] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.323902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.334284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 00:29:59.645 [2024-07-26 21:33:34.343922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.343961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.343977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.343987] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.343995] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.354125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 
00:29:59.645 [2024-07-26 21:33:34.363912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.363947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.363963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.363972] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.363981] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.374309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 00:29:59.645 [2024-07-26 21:33:34.383927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.383968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.383986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.383996] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.384004] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.394284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 00:29:59.645 [2024-07-26 21:33:34.404019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.404068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.404084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.404094] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.404103] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.414679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 
00:29:59.645 [2024-07-26 21:33:34.424104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.424143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.424160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.424169] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.424178] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.434493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 00:29:59.645 [2024-07-26 21:33:34.444113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.444155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.444171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.444180] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.444189] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.454573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 00:29:59.645 [2024-07-26 21:33:34.464185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.464225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.464241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.464250] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.464259] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.474538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 
00:29:59.645 [2024-07-26 21:33:34.484281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.645 [2024-07-26 21:33:34.484320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.645 [2024-07-26 21:33:34.484337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.645 [2024-07-26 21:33:34.484346] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.645 [2024-07-26 21:33:34.484355] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.645 [2024-07-26 21:33:34.494583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.645 qpair failed and we were unable to recover it. 00:29:59.645 [2024-07-26 21:33:34.504412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.646 [2024-07-26 21:33:34.504451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.646 [2024-07-26 21:33:34.504470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.646 [2024-07-26 21:33:34.504480] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.646 [2024-07-26 21:33:34.504489] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.904 [2024-07-26 21:33:34.514680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 00:29:59.905 [2024-07-26 21:33:34.524396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.524437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.524457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.524467] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.524476] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.534852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 
00:29:59.905 [2024-07-26 21:33:34.544463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.544506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.544523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.544533] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.544542] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.554929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 00:29:59.905 [2024-07-26 21:33:34.564529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.564571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.564589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.564598] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.564607] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.574902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 00:29:59.905 [2024-07-26 21:33:34.584559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.584597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.584615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.584629] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.584641] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.594994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 
00:29:59.905 [2024-07-26 21:33:34.604722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.604758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.604775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.604784] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.604794] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.615025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 00:29:59.905 [2024-07-26 21:33:34.624698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.624739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.624756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.624765] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.624774] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.635091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 00:29:59.905 [2024-07-26 21:33:34.644809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.644852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.644869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.644879] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.644888] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.655140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 
00:29:59.905 [2024-07-26 21:33:34.664835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.664879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.664898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.664907] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.664917] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.675320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 00:29:59.905 [2024-07-26 21:33:34.684913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.684950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.684967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.684977] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.684985] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.695306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 00:29:59.905 [2024-07-26 21:33:34.704945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.704988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.705004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.705013] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.705022] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.715495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 
00:29:59.905 [2024-07-26 21:33:34.725013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.725056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.725072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.725081] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.725090] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.735461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 00:29:59.905 [2024-07-26 21:33:34.745064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.905 [2024-07-26 21:33:34.745104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.905 [2024-07-26 21:33:34.745120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.905 [2024-07-26 21:33:34.745129] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.905 [2024-07-26 21:33:34.745138] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:59.905 [2024-07-26 21:33:34.755460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.905 qpair failed and we were unable to recover it. 00:29:59.905 [2024-07-26 21:33:34.765166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.906 [2024-07-26 21:33:34.765203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.906 [2024-07-26 21:33:34.765219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.906 [2024-07-26 21:33:34.765232] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.906 [2024-07-26 21:33:34.765240] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.165 [2024-07-26 21:33:34.775492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.165 qpair failed and we were unable to recover it. 
00:30:00.165 [2024-07-26 21:33:34.785168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-26 21:33:34.785211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-26 21:33:34.785230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-26 21:33:34.785240] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-26 21:33:34.785249] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.165 [2024-07-26 21:33:34.795678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-26 21:33:34.805289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-26 21:33:34.805330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-26 21:33:34.805347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-26 21:33:34.805356] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-26 21:33:34.805365] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.165 [2024-07-26 21:33:34.815739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-26 21:33:34.825347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-26 21:33:34.825388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-26 21:33:34.825405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-26 21:33:34.825414] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-26 21:33:34.825423] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.165 [2024-07-26 21:33:34.835727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.165 qpair failed and we were unable to recover it. 
00:30:00.165 [2024-07-26 21:33:34.845411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-26 21:33:34.845449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-26 21:33:34.845466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-26 21:33:34.845476] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-26 21:33:34.845484] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.165 [2024-07-26 21:33:34.855609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-26 21:33:34.865452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-26 21:33:34.865493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-26 21:33:34.865509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-26 21:33:34.865518] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-26 21:33:34.865527] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.165 [2024-07-26 21:33:34.875861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-26 21:33:34.885538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-26 21:33:34.885582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-26 21:33:34.885599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-26 21:33:34.885608] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-26 21:33:34.885618] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.166 [2024-07-26 21:33:34.895932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.166 qpair failed and we were unable to recover it. 
00:30:00.166 [2024-07-26 21:33:34.905578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-26 21:33:34.905617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-26 21:33:34.905642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-26 21:33:34.905652] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-26 21:33:34.905661] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.166 [2024-07-26 21:33:34.915921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.166 [2024-07-26 21:33:34.927787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-26 21:33:34.927832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-26 21:33:34.927848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-26 21:33:34.927859] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-26 21:33:34.927867] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.166 [2024-07-26 21:33:34.936007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.166 [2024-07-26 21:33:34.945700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-26 21:33:34.945740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-26 21:33:34.945759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-26 21:33:34.945768] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-26 21:33:34.945777] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.166 [2024-07-26 21:33:34.956098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.166 qpair failed and we were unable to recover it. 
00:30:00.166 [2024-07-26 21:33:34.965740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-26 21:33:34.965786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-26 21:33:34.965802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-26 21:33:34.965812] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-26 21:33:34.965821] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.166 [2024-07-26 21:33:34.975956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.166 [2024-07-26 21:33:34.985671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-26 21:33:34.985714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-26 21:33:34.985732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-26 21:33:34.985741] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-26 21:33:34.985751] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.166 [2024-07-26 21:33:34.996184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.166 [2024-07-26 21:33:35.005885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-26 21:33:35.005924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-26 21:33:35.005940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-26 21:33:35.005950] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-26 21:33:35.005959] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.166 [2024-07-26 21:33:35.016126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.166 qpair failed and we were unable to recover it. 
00:30:00.166 [2024-07-26 21:33:35.025990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-26 21:33:35.026028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-26 21:33:35.026045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-26 21:33:35.026054] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-26 21:33:35.026076] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.036443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 00:30:00.425 [2024-07-26 21:33:35.046032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.046074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.046092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.046102] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.046111] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.056334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 00:30:00.425 [2024-07-26 21:33:35.066098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.066142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.066158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.066167] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.066176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.076571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 
00:30:00.425 [2024-07-26 21:33:35.086119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.086159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.086176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.086186] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.086195] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.096490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 00:30:00.425 [2024-07-26 21:33:35.106176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.106225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.106241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.106251] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.106260] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.116504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 00:30:00.425 [2024-07-26 21:33:35.126226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.126266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.126282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.126291] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.126300] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.136630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 
00:30:00.425 [2024-07-26 21:33:35.146349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.146398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.146414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.146423] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.146432] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.156677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 00:30:00.425 [2024-07-26 21:33:35.166461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.166497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.166516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.166526] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.166535] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.176696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 00:30:00.425 [2024-07-26 21:33:35.186548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.186589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.186606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.186615] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.186629] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.196754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 
00:30:00.425 [2024-07-26 21:33:35.206607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.206650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.206667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.206680] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.206689] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.216773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 00:30:00.425 [2024-07-26 21:33:35.226678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.226719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.226735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.226745] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.226754] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.236862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 00:30:00.425 [2024-07-26 21:33:35.246727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.246766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.246782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.246792] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.246801] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.257022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 
00:30:00.425 [2024-07-26 21:33:35.266770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.266810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.266827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.266836] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.266845] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.425 [2024-07-26 21:33:35.277074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.425 qpair failed and we were unable to recover it. 00:30:00.425 [2024-07-26 21:33:35.286793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.425 [2024-07-26 21:33:35.286836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.425 [2024-07-26 21:33:35.286854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.425 [2024-07-26 21:33:35.286864] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.425 [2024-07-26 21:33:35.286873] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.297158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 00:30:00.684 [2024-07-26 21:33:35.306859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.684 [2024-07-26 21:33:35.306901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.684 [2024-07-26 21:33:35.306919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.684 [2024-07-26 21:33:35.306929] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.684 [2024-07-26 21:33:35.306939] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.317168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 
00:30:00.684 [2024-07-26 21:33:35.326961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.684 [2024-07-26 21:33:35.327004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.684 [2024-07-26 21:33:35.327021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.684 [2024-07-26 21:33:35.327031] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.684 [2024-07-26 21:33:35.327040] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.337142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 00:30:00.684 [2024-07-26 21:33:35.347060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.684 [2024-07-26 21:33:35.347100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.684 [2024-07-26 21:33:35.347117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.684 [2024-07-26 21:33:35.347126] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.684 [2024-07-26 21:33:35.347135] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.357299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 00:30:00.684 [2024-07-26 21:33:35.366970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.684 [2024-07-26 21:33:35.367011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.684 [2024-07-26 21:33:35.367028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.684 [2024-07-26 21:33:35.367037] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.684 [2024-07-26 21:33:35.367046] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.377366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 
00:30:00.684 [2024-07-26 21:33:35.387198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.684 [2024-07-26 21:33:35.387238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.684 [2024-07-26 21:33:35.387259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.684 [2024-07-26 21:33:35.387268] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.684 [2024-07-26 21:33:35.387277] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.397405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 00:30:00.684 [2024-07-26 21:33:35.407043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.684 [2024-07-26 21:33:35.407084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.684 [2024-07-26 21:33:35.407100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.684 [2024-07-26 21:33:35.407109] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.684 [2024-07-26 21:33:35.407118] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.417563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 00:30:00.684 [2024-07-26 21:33:35.427241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.684 [2024-07-26 21:33:35.427280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.684 [2024-07-26 21:33:35.427296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.684 [2024-07-26 21:33:35.427306] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.684 [2024-07-26 21:33:35.427314] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.437522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 
00:30:00.684 [2024-07-26 21:33:35.447393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.684 [2024-07-26 21:33:35.447434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.684 [2024-07-26 21:33:35.447450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.684 [2024-07-26 21:33:35.447460] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.684 [2024-07-26 21:33:35.447468] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.457603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 00:30:00.684 [2024-07-26 21:33:35.467315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.684 [2024-07-26 21:33:35.467354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.684 [2024-07-26 21:33:35.467371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.684 [2024-07-26 21:33:35.467380] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.684 [2024-07-26 21:33:35.467392] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.684 [2024-07-26 21:33:35.477630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.684 qpair failed and we were unable to recover it. 00:30:00.684 [2024-07-26 21:33:35.487398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.685 [2024-07-26 21:33:35.487439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.685 [2024-07-26 21:33:35.487457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.685 [2024-07-26 21:33:35.487466] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.685 [2024-07-26 21:33:35.487475] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.685 [2024-07-26 21:33:35.497744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.685 qpair failed and we were unable to recover it. 
00:30:00.685 [2024-07-26 21:33:35.507398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.685 [2024-07-26 21:33:35.507438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.685 [2024-07-26 21:33:35.507455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.685 [2024-07-26 21:33:35.507464] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.685 [2024-07-26 21:33:35.507473] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.685 [2024-07-26 21:33:35.517843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.685 qpair failed and we were unable to recover it. 00:30:00.685 [2024-07-26 21:33:35.527624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.685 [2024-07-26 21:33:35.527669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.685 [2024-07-26 21:33:35.527686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.685 [2024-07-26 21:33:35.527695] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.685 [2024-07-26 21:33:35.527704] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.685 [2024-07-26 21:33:35.537958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.685 qpair failed and we were unable to recover it. 00:30:00.685 [2024-07-26 21:33:35.547577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.685 [2024-07-26 21:33:35.547629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.685 [2024-07-26 21:33:35.547646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.685 [2024-07-26 21:33:35.547656] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.685 [2024-07-26 21:33:35.547665] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.557874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 
00:30:00.944 [2024-07-26 21:33:35.567514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.944 [2024-07-26 21:33:35.567560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.944 [2024-07-26 21:33:35.567577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.944 [2024-07-26 21:33:35.567586] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.944 [2024-07-26 21:33:35.567596] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.578058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 00:30:00.944 [2024-07-26 21:33:35.587574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.944 [2024-07-26 21:33:35.587613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.944 [2024-07-26 21:33:35.587635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.944 [2024-07-26 21:33:35.587644] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.944 [2024-07-26 21:33:35.587653] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.598149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 00:30:00.944 [2024-07-26 21:33:35.607666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.944 [2024-07-26 21:33:35.607706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.944 [2024-07-26 21:33:35.607722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.944 [2024-07-26 21:33:35.607731] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.944 [2024-07-26 21:33:35.607740] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.618189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 
00:30:00.944 [2024-07-26 21:33:35.627749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.944 [2024-07-26 21:33:35.627796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.944 [2024-07-26 21:33:35.627813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.944 [2024-07-26 21:33:35.627822] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.944 [2024-07-26 21:33:35.627832] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.638298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 00:30:00.944 [2024-07-26 21:33:35.647796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.944 [2024-07-26 21:33:35.647833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.944 [2024-07-26 21:33:35.647851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.944 [2024-07-26 21:33:35.647864] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.944 [2024-07-26 21:33:35.647873] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.658116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 00:30:00.944 [2024-07-26 21:33:35.667901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.944 [2024-07-26 21:33:35.667940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.944 [2024-07-26 21:33:35.667959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.944 [2024-07-26 21:33:35.667969] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.944 [2024-07-26 21:33:35.667978] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.678146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 
00:30:00.944 [2024-07-26 21:33:35.687907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.944 [2024-07-26 21:33:35.687951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.944 [2024-07-26 21:33:35.687970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.944 [2024-07-26 21:33:35.687980] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.944 [2024-07-26 21:33:35.687989] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.698317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 00:30:00.944 [2024-07-26 21:33:35.707937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.944 [2024-07-26 21:33:35.707972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.944 [2024-07-26 21:33:35.707989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.944 [2024-07-26 21:33:35.707998] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.944 [2024-07-26 21:33:35.708007] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.718248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 00:30:00.944 [2024-07-26 21:33:35.728070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.944 [2024-07-26 21:33:35.728105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.944 [2024-07-26 21:33:35.728122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.944 [2024-07-26 21:33:35.728131] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.944 [2024-07-26 21:33:35.728140] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.944 [2024-07-26 21:33:35.738277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.944 qpair failed and we were unable to recover it. 
00:30:00.945 [2024-07-26 21:33:35.748110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.945 [2024-07-26 21:33:35.748149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.945 [2024-07-26 21:33:35.748165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.945 [2024-07-26 21:33:35.748175] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.945 [2024-07-26 21:33:35.748184] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.945 [2024-07-26 21:33:35.758587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.945 qpair failed and we were unable to recover it. 00:30:00.945 [2024-07-26 21:33:35.768047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.945 [2024-07-26 21:33:35.768090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.945 [2024-07-26 21:33:35.768107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.945 [2024-07-26 21:33:35.768117] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.945 [2024-07-26 21:33:35.768125] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.945 [2024-07-26 21:33:35.778624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.945 qpair failed and we were unable to recover it. 00:30:00.945 [2024-07-26 21:33:35.788267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.945 [2024-07-26 21:33:35.788312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.945 [2024-07-26 21:33:35.788330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.945 [2024-07-26 21:33:35.788339] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.945 [2024-07-26 21:33:35.788349] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:00.945 [2024-07-26 21:33:35.798647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.945 qpair failed and we were unable to recover it. 
00:30:00.945 [2024-07-26 21:33:35.808230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.945 [2024-07-26 21:33:35.808267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.945 [2024-07-26 21:33:35.808284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.945 [2024-07-26 21:33:35.808294] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.945 [2024-07-26 21:33:35.808303] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.203 [2024-07-26 21:33:35.818887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.203 qpair failed and we were unable to recover it. 00:30:01.203 [2024-07-26 21:33:35.828321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.203 [2024-07-26 21:33:35.828362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.203 [2024-07-26 21:33:35.828382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.203 [2024-07-26 21:33:35.828392] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.203 [2024-07-26 21:33:35.828401] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.203 [2024-07-26 21:33:35.838844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.203 qpair failed and we were unable to recover it. 00:30:01.203 [2024-07-26 21:33:35.848407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.203 [2024-07-26 21:33:35.848446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:35.848462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:35.848471] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:35.848480] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:35.858862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 
00:30:01.204 [2024-07-26 21:33:35.868425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:35.868460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:35.868477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:35.868486] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:35.868495] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:35.878855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 00:30:01.204 [2024-07-26 21:33:35.888502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:35.888536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:35.888553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:35.888562] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:35.888571] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:35.898938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 00:30:01.204 [2024-07-26 21:33:35.908520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:35.908562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:35.908578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:35.908588] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:35.908597] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:35.919048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 
00:30:01.204 [2024-07-26 21:33:35.928582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:35.928629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:35.928645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:35.928655] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:35.928664] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:35.939033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 00:30:01.204 [2024-07-26 21:33:35.948697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:35.948734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:35.948750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:35.948759] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:35.948768] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:35.959237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 00:30:01.204 [2024-07-26 21:33:35.968739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:35.968782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:35.968799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:35.968808] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:35.968817] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:35.979123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 
00:30:01.204 [2024-07-26 21:33:35.988771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:35.988813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:35.988832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:35.988841] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:35.988850] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:35.999233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 00:30:01.204 [2024-07-26 21:33:36.008896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:36.008941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:36.008957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:36.008967] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:36.008975] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:36.019267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 00:30:01.204 [2024-07-26 21:33:36.028951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:36.028990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:36.029006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:36.029015] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:36.029024] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:36.039378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 
00:30:01.204 [2024-07-26 21:33:36.049135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:36.049174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:36.049190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:36.049199] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:36.049208] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.204 [2024-07-26 21:33:36.059596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.204 qpair failed and we were unable to recover it. 00:30:01.204 [2024-07-26 21:33:36.069006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.204 [2024-07-26 21:33:36.069047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.204 [2024-07-26 21:33:36.069064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.204 [2024-07-26 21:33:36.069074] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.204 [2024-07-26 21:33:36.069082] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.079445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 00:30:01.462 [2024-07-26 21:33:36.089196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.089235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.089252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.089262] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.089274] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.099656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 
00:30:01.462 [2024-07-26 21:33:36.109138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.109182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.109198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.109208] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.109217] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.119669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 00:30:01.462 [2024-07-26 21:33:36.129159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.129203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.129219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.129229] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.129238] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.139616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 00:30:01.462 [2024-07-26 21:33:36.149312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.149351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.149366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.149376] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.149385] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.159520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 
00:30:01.462 [2024-07-26 21:33:36.169236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.169276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.169295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.169304] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.169313] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.179687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 00:30:01.462 [2024-07-26 21:33:36.189378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.189419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.189436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.189446] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.189455] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.199612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 00:30:01.462 [2024-07-26 21:33:36.209337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.209382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.209399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.209408] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.209417] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.219956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 
00:30:01.462 [2024-07-26 21:33:36.229510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.229552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.229568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.229578] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.229586] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.240132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 00:30:01.462 [2024-07-26 21:33:36.249541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.249585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.249601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.249610] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.249619] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.259997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 00:30:01.462 [2024-07-26 21:33:36.269619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.269666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.269685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.269694] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.269703] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.280109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 
00:30:01.462 [2024-07-26 21:33:36.289721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.289762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.289781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.289790] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.289800] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.300086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 00:30:01.462 [2024-07-26 21:33:36.309792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.309833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.309851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.309861] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.462 [2024-07-26 21:33:36.309870] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.462 [2024-07-26 21:33:36.320210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.462 qpair failed and we were unable to recover it. 00:30:01.462 [2024-07-26 21:33:36.329735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.462 [2024-07-26 21:33:36.329783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.462 [2024-07-26 21:33:36.329801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.462 [2024-07-26 21:33:36.329811] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.463 [2024-07-26 21:33:36.329820] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.340271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 
00:30:01.721 [2024-07-26 21:33:36.349827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.349868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.349885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.349894] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.349902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.360293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 00:30:01.721 [2024-07-26 21:33:36.369949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.369988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.370005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.370014] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.370023] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.380286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 00:30:01.721 [2024-07-26 21:33:36.389825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.389866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.389883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.389893] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.389902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.400289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 
00:30:01.721 [2024-07-26 21:33:36.410035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.410081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.410097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.410106] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.410115] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.420338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 00:30:01.721 [2024-07-26 21:33:36.430123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.430164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.430180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.430190] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.430199] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.440290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 00:30:01.721 [2024-07-26 21:33:36.450075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.450111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.450130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.450140] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.450148] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.460505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 
00:30:01.721 [2024-07-26 21:33:36.470243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.470282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.470298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.470308] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.470317] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.480744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 00:30:01.721 [2024-07-26 21:33:36.490281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.490327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.490344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.490354] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.490363] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.500556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 00:30:01.721 [2024-07-26 21:33:36.510320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.510363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.510380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.510390] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.510398] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.520881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 
00:30:01.721 [2024-07-26 21:33:36.530410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.530451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.530467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.530476] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.530488] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.540901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 00:30:01.721 [2024-07-26 21:33:36.550388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.550429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.550446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.550455] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.550464] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.560851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 00:30:01.721 [2024-07-26 21:33:36.570557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.721 [2024-07-26 21:33:36.570601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.721 [2024-07-26 21:33:36.570617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.721 [2024-07-26 21:33:36.570631] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.721 [2024-07-26 21:33:36.570640] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.721 [2024-07-26 21:33:36.580773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.721 qpair failed and we were unable to recover it. 
00:30:01.978 [2024-07-26 21:33:36.590459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.590503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.590522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.590532] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.590541] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.600967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 00:30:01.979 [2024-07-26 21:33:36.610646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.610687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.610704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.610714] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.610723] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.621064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 00:30:01.979 [2024-07-26 21:33:36.630715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.630764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.630781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.630790] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.630799] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.641103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 
00:30:01.979 [2024-07-26 21:33:36.650771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.650815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.650831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.650841] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.650850] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.660997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 00:30:01.979 [2024-07-26 21:33:36.670875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.670919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.670936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.670947] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.670955] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.681225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 00:30:01.979 [2024-07-26 21:33:36.690861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.690900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.690917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.690926] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.690935] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.701336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 
00:30:01.979 [2024-07-26 21:33:36.711001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.711042] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.711060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.711073] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.711081] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.721267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 00:30:01.979 [2024-07-26 21:33:36.731056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.731099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.731116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.731126] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.731135] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.741481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 00:30:01.979 [2024-07-26 21:33:36.751133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.751180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.751196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.751206] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.751214] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.761463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 
00:30:01.979 [2024-07-26 21:33:36.771157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.771192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.771209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.771218] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.771227] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.781451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 00:30:01.979 [2024-07-26 21:33:36.791164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.791205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.791222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.791231] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.791240] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.801546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 00:30:01.979 [2024-07-26 21:33:36.811235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.811277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.811294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.811305] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.811314] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.821811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 
00:30:01.979 [2024-07-26 21:33:36.831471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.979 [2024-07-26 21:33:36.831513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.979 [2024-07-26 21:33:36.831530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.979 [2024-07-26 21:33:36.831540] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.979 [2024-07-26 21:33:36.831549] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:01.979 [2024-07-26 21:33:36.841811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.979 qpair failed and we were unable to recover it. 00:30:02.238 [2024-07-26 21:33:36.851407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.238 [2024-07-26 21:33:36.851448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.238 [2024-07-26 21:33:36.851467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.238 [2024-07-26 21:33:36.851478] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.238 [2024-07-26 21:33:36.851488] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.238 [2024-07-26 21:33:36.861747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.238 qpair failed and we were unable to recover it. 00:30:02.238 [2024-07-26 21:33:36.871485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.238 [2024-07-26 21:33:36.871523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.238 [2024-07-26 21:33:36.871540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.238 [2024-07-26 21:33:36.871550] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.238 [2024-07-26 21:33:36.871559] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.238 [2024-07-26 21:33:36.881666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.238 qpair failed and we were unable to recover it. 
00:30:02.238 [2024-07-26 21:33:36.891552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.238 [2024-07-26 21:33:36.891603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.238 [2024-07-26 21:33:36.891629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:36.891640] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:36.891648] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:36.901879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 00:30:02.239 [2024-07-26 21:33:36.911615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:36.911659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:36.911677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:36.911686] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:36.911696] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:36.922099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 00:30:02.239 [2024-07-26 21:33:36.931737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:36.931781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:36.931798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:36.931808] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:36.931817] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:36.942122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 
00:30:02.239 [2024-07-26 21:33:36.951777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:36.951817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:36.951834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:36.951844] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:36.951853] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:36.962150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 00:30:02.239 [2024-07-26 21:33:36.971891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:36.971937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:36.971953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:36.971964] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:36.971976] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:36.982153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 00:30:02.239 [2024-07-26 21:33:36.991950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:36.991990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:36.992008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:36.992018] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:36.992026] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:37.002110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 
00:30:02.239 [2024-07-26 21:33:37.011970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:37.012012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:37.012029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:37.012039] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:37.012047] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:37.022018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 00:30:02.239 [2024-07-26 21:33:37.031897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:37.031936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:37.031952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:37.031961] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:37.031970] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:37.042227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 00:30:02.239 [2024-07-26 21:33:37.051950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:37.051989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:37.052005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:37.052015] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:37.052024] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:37.062385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 
00:30:02.239 [2024-07-26 21:33:37.072139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:37.072182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:37.072199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:37.072208] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:37.072217] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:37.082386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 00:30:02.239 [2024-07-26 21:33:37.092183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.239 [2024-07-26 21:33:37.092224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.239 [2024-07-26 21:33:37.092241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.239 [2024-07-26 21:33:37.092251] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.239 [2024-07-26 21:33:37.092261] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.239 [2024-07-26 21:33:37.102458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.239 qpair failed and we were unable to recover it. 00:30:02.535 [2024-07-26 21:33:37.112094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.535 [2024-07-26 21:33:37.112135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.535 [2024-07-26 21:33:37.112154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.535 [2024-07-26 21:33:37.112163] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.535 [2024-07-26 21:33:37.112172] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.535 [2024-07-26 21:33:37.122544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.535 qpair failed and we were unable to recover it. 
00:30:02.535 [2024-07-26 21:33:37.132293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.535 [2024-07-26 21:33:37.132333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.535 [2024-07-26 21:33:37.132349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.535 [2024-07-26 21:33:37.132359] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.535 [2024-07-26 21:33:37.132368] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.535 [2024-07-26 21:33:37.142674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.535 qpair failed and we were unable to recover it. 00:30:02.535 [2024-07-26 21:33:37.152315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.535 [2024-07-26 21:33:37.152359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.535 [2024-07-26 21:33:37.152376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.535 [2024-07-26 21:33:37.152389] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.535 [2024-07-26 21:33:37.152397] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.535 [2024-07-26 21:33:37.162611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.535 qpair failed and we were unable to recover it. 00:30:02.535 [2024-07-26 21:33:37.172366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.536 [2024-07-26 21:33:37.172406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.536 [2024-07-26 21:33:37.172425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.536 [2024-07-26 21:33:37.172435] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.536 [2024-07-26 21:33:37.172443] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.536 [2024-07-26 21:33:37.182777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.536 qpair failed and we were unable to recover it. 
00:30:02.536 [2024-07-26 21:33:37.192455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.536 [2024-07-26 21:33:37.192495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.536 [2024-07-26 21:33:37.192512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.536 [2024-07-26 21:33:37.192522] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.536 [2024-07-26 21:33:37.192531] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:02.536 [2024-07-26 21:33:37.202877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.536 qpair failed and we were unable to recover it. 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Write completed 
with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 Read completed with error (sct=0, sc=8) 00:30:03.508 starting I/O failed 00:30:03.508 [2024-07-26 21:33:38.207910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.508 [2024-07-26 21:33:38.214982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.508 [2024-07-26 21:33:38.215026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.508 [2024-07-26 21:33:38.215045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.508 [2024-07-26 21:33:38.215055] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.508 [2024-07-26 21:33:38.215064] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:30:03.508 [2024-07-26 21:33:38.225854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.508 qpair failed and we were unable to recover it. 00:30:03.508 [2024-07-26 21:33:38.235467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.508 [2024-07-26 21:33:38.235509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.508 [2024-07-26 21:33:38.235526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.508 [2024-07-26 21:33:38.235536] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.508 [2024-07-26 21:33:38.235545] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:30:03.508 [2024-07-26 21:33:38.245715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.508 qpair failed and we were unable to recover it. 00:30:03.508 [2024-07-26 21:33:38.245856] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:03.508 A controller has encountered a failure and is being reset. 00:30:03.508 [2024-07-26 21:33:38.245974] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:30:03.508 [2024-07-26 21:33:38.279518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:03.508 Controller properly reset. 
00:30:03.508 Initializing NVMe Controllers 00:30:03.508 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.508 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.508 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:03.508 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:03.508 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:03.508 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:03.508 Initialization complete. Launching workers. 00:30:03.508 Starting thread on core 1 00:30:03.508 Starting thread on core 2 00:30:03.508 Starting thread on core 3 00:30:03.508 Starting thread on core 0 00:30:03.508 21:33:38 -- host/target_disconnect.sh@59 -- # sync 00:30:03.508 00:30:03.508 real 0m12.511s 00:30:03.508 user 0m26.951s 00:30:03.508 sys 0m3.210s 00:30:03.508 21:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.508 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:30:03.508 ************************************ 00:30:03.508 END TEST nvmf_target_disconnect_tc2 00:30:03.508 ************************************ 00:30:03.767 21:33:38 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:30:03.767 21:33:38 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:30:03.767 21:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:03.767 21:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:03.767 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:30:03.767 ************************************ 00:30:03.767 START TEST nvmf_target_disconnect_tc3 00:30:03.767 ************************************ 00:30:03.767 21:33:38 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc3 00:30:03.767 21:33:38 -- host/target_disconnect.sh@65 -- # reconnectpid=1854864 00:30:03.767 21:33:38 -- host/target_disconnect.sh@67 -- # sleep 2 00:30:03.767 21:33:38 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:30:03.767 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.671 21:33:40 -- host/target_disconnect.sh@68 -- # kill -9 1853717 00:30:05.671 21:33:40 -- host/target_disconnect.sh@70 -- # sleep 2 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write 
completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Read completed with error (sct=0, sc=8) 00:30:07.049 starting I/O failed 00:30:07.049 Write completed with error (sct=0, sc=8) 00:30:07.050 starting I/O failed 00:30:07.050 Read completed with error (sct=0, sc=8) 00:30:07.050 starting I/O failed 00:30:07.050 Read completed with error (sct=0, sc=8) 00:30:07.050 starting I/O failed 00:30:07.050 [2024-07-26 21:33:41.612566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.618 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 1853717 Killed "${NVMF_APP[@]}" "$@" 00:30:07.618 21:33:42 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:30:07.618 21:33:42 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:07.618 21:33:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:07.618 21:33:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:07.618 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:30:07.618 21:33:42 -- nvmf/common.sh@469 -- # nvmfpid=1855665 00:30:07.618 21:33:42 -- nvmf/common.sh@470 -- # waitforlisten 1855665 00:30:07.618 21:33:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:07.618 21:33:42 -- common/autotest_common.sh@819 -- # '[' -z 1855665 ']' 00:30:07.618 21:33:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.618 21:33:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:07.618 21:33:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.618 21:33:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:07.618 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:30:07.618 [2024-07-26 21:33:42.473054] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:30:07.618 [2024-07-26 21:33:42.473107] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.877 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.877 [2024-07-26 21:33:42.576726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:07.877 [2024-07-26 21:33:42.612751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:07.877 [2024-07-26 21:33:42.612868] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.877 [2024-07-26 21:33:42.612878] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.877 [2024-07-26 21:33:42.612887] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.877 [2024-07-26 21:33:42.613010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:07.877 [2024-07-26 21:33:42.613117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:07.877 [2024-07-26 21:33:42.613227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:07.877 [2024-07-26 21:33:42.613226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:07.877 Read completed with error (sct=0, sc=8) 00:30:07.877 starting I/O failed 00:30:07.877 Read completed with error (sct=0, sc=8) 00:30:07.877 starting I/O failed 00:30:07.877 Read completed with error (sct=0, sc=8) 00:30:07.877 starting I/O failed 00:30:07.877 Read completed with error (sct=0, sc=8) 00:30:07.877 starting I/O failed 00:30:07.877 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O 
failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Write completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 Read completed with error (sct=0, sc=8) 00:30:07.878 starting I/O failed 00:30:07.878 [2024-07-26 21:33:42.617882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.446 21:33:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:08.446 21:33:43 -- common/autotest_common.sh@852 -- # return 0 00:30:08.446 21:33:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:08.446 21:33:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:08.446 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:30:08.704 21:33:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.704 21:33:43 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:08.704 21:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.704 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:30:08.704 Malloc0 00:30:08.704 21:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.704 21:33:43 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:08.704 21:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.704 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:30:08.704 [2024-07-26 21:33:43.367325] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x933820/0x93f440) succeed. 00:30:08.705 [2024-07-26 21:33:43.378027] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x934e10/0x9bf480) succeed. 
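For reference, the rpc_cmd calls traced around this point stand up the tc3 target. A minimal standalone sketch of the same sequence — assuming an SPDK checkout and that scripts/rpc.py is used against the already-running nvmf_tgt (the rpc_cmd test helper used here appears to issue the same RPC methods) — would look roughly like:

    # back the namespace with a 64 MB malloc bdev using 512-byte blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # enable the RDMA transport with 1024 shared buffers
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    # create the subsystem, attach the namespace, and listen on the alternate
    # (failover) address 192.168.100.9, as traced in the entries just below
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

All values are taken from the rpc_cmd traces in this log; the scripts/rpc.py form and relative paths are assumptions for illustration only.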
00:30:08.705 21:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.705 21:33:43 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:08.705 21:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.705 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:30:08.705 21:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.705 21:33:43 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.705 21:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.705 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:30:08.705 21:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.705 21:33:43 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:30:08.705 21:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.705 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:30:08.705 [2024-07-26 21:33:43.525037] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:30:08.705 21:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.705 21:33:43 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:30:08.705 21:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.705 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:30:08.705 21:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.705 21:33:43 -- host/target_disconnect.sh@73 -- # wait 1854864 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with 
error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Write completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 Read completed with error (sct=0, sc=8) 00:30:08.963 starting I/O failed 00:30:08.963 [2024-07-26 21:33:43.623019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.963 [2024-07-26 21:33:43.624540] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:08.963 [2024-07-26 21:33:43.624562] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:08.963 [2024-07-26 21:33:43.624571] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:09.899 [2024-07-26 21:33:44.628399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.899 qpair failed and we were unable to recover it. 00:30:09.899 [2024-07-26 21:33:44.629982] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:09.899 [2024-07-26 21:33:44.629999] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:09.899 [2024-07-26 21:33:44.630008] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:10.837 [2024-07-26 21:33:45.633761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.837 qpair failed and we were unable to recover it. 00:30:10.837 [2024-07-26 21:33:45.635187] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:10.837 [2024-07-26 21:33:45.635205] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:10.837 [2024-07-26 21:33:45.635214] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:11.775 [2024-07-26 21:33:46.638980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.775 qpair failed and we were unable to recover it. 
00:30:11.775 [2024-07-26 21:33:46.640454] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:11.775 [2024-07-26 21:33:46.640471] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:11.775 [2024-07-26 21:33:46.640480] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:13.153 [2024-07-26 21:33:47.644370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:13.153 qpair failed and we were unable to recover it. 00:30:13.153 [2024-07-26 21:33:47.645954] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:13.153 [2024-07-26 21:33:47.645971] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:13.153 [2024-07-26 21:33:47.645980] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:14.091 [2024-07-26 21:33:48.649847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.091 qpair failed and we were unable to recover it. 00:30:14.091 [2024-07-26 21:33:48.651305] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:14.091 [2024-07-26 21:33:48.651322] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:14.091 [2024-07-26 21:33:48.651331] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:15.029 [2024-07-26 21:33:49.655165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.029 qpair failed and we were unable to recover it. 00:30:15.029 [2024-07-26 21:33:49.656646] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:15.029 [2024-07-26 21:33:49.656667] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:15.029 [2024-07-26 21:33:49.656677] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:30:15.967 [2024-07-26 21:33:50.660587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-26 21:33:50.662206] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:15.967 [2024-07-26 21:33:50.662229] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:15.967 [2024-07-26 21:33:50.662238] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:30:16.904 [2024-07-26 21:33:51.666122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.904 qpair failed and we were unable to recover it. 
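The keep-alive failure and failover to 192.168.100.9 recorded in the next entries are driven by the reconnect example launched at the start of tc3 with an alternate transport address. For reference, that invocation (traced earlier in this log; the binary path assumes this workspace's SPDK build tree) was:

    # 32-deep, 4 KiB random read/write (50/50 mix) for 10 s on cores 0-3, with an
    # alternate transport address to fall back to when the primary listener disappears
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'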
00:30:16.904 [2024-07-26 21:33:51.667569] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:16.904 [2024-07-26 21:33:51.667586] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:16.904 [2024-07-26 21:33:51.667594] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:30:17.842 [2024-07-26 21:33:52.671463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.842 qpair failed and we were unable to recover it. 00:30:17.842 [2024-07-26 21:33:52.671568] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:17.842 A controller has encountered a failure and is being reset. 00:30:17.842 Resorting to new failover address 192.168.100.9 00:30:17.842 [2024-07-26 21:33:52.671692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.842 [2024-07-26 21:33:52.671760] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:30:17.842 [2024-07-26 21:33:52.673951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:17.842 Controller properly reset. 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read 
completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Read completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 Write completed with error (sct=0, sc=8) 00:30:19.219 starting I/O failed 00:30:19.219 [2024-07-26 21:33:53.718282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.219 Initializing NVMe Controllers 00:30:19.219 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.219 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.219 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:19.219 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:19.219 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:19.220 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:19.220 Initialization complete. Launching workers. 00:30:19.220 Starting thread on core 1 00:30:19.220 Starting thread on core 2 00:30:19.220 Starting thread on core 3 00:30:19.220 Starting thread on core 0 00:30:19.220 21:33:53 -- host/target_disconnect.sh@74 -- # sync 00:30:19.220 00:30:19.220 real 0m15.365s 00:30:19.220 user 0m55.997s 00:30:19.220 sys 0m4.968s 00:30:19.220 21:33:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.220 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:30:19.220 ************************************ 00:30:19.220 END TEST nvmf_target_disconnect_tc3 00:30:19.220 ************************************ 00:30:19.220 21:33:53 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:19.220 21:33:53 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:19.220 21:33:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:19.220 21:33:53 -- nvmf/common.sh@116 -- # sync 00:30:19.220 21:33:53 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:30:19.220 21:33:53 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:30:19.220 21:33:53 -- nvmf/common.sh@119 -- # set +e 00:30:19.220 21:33:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:19.220 21:33:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:30:19.220 rmmod nvme_rdma 00:30:19.220 rmmod nvme_fabrics 00:30:19.220 21:33:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:19.220 21:33:53 -- nvmf/common.sh@123 -- # set -e 00:30:19.220 21:33:53 -- nvmf/common.sh@124 -- # return 0 00:30:19.220 21:33:53 -- nvmf/common.sh@477 -- # '[' -n 1855665 ']' 00:30:19.220 21:33:53 -- nvmf/common.sh@478 -- # killprocess 1855665 00:30:19.220 21:33:53 -- common/autotest_common.sh@926 -- # '[' -z 1855665 ']' 00:30:19.220 21:33:53 -- common/autotest_common.sh@930 -- # kill -0 1855665 00:30:19.220 21:33:53 -- common/autotest_common.sh@931 -- # uname 00:30:19.220 21:33:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:19.220 
21:33:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1855665 00:30:19.220 21:33:53 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:30:19.220 21:33:53 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:30:19.220 21:33:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1855665' 00:30:19.220 killing process with pid 1855665 00:30:19.220 21:33:53 -- common/autotest_common.sh@945 -- # kill 1855665 00:30:19.220 21:33:53 -- common/autotest_common.sh@950 -- # wait 1855665 00:30:19.479 21:33:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:19.479 21:33:54 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:30:19.479 00:30:19.479 real 0m37.924s 00:30:19.479 user 2m11.922s 00:30:19.479 sys 0m15.239s 00:30:19.479 21:33:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.479 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 ************************************ 00:30:19.479 END TEST nvmf_target_disconnect 00:30:19.479 ************************************ 00:30:19.479 21:33:54 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:19.479 21:33:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:19.479 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 21:33:54 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:19.479 00:30:19.479 real 21m56.954s 00:30:19.479 user 67m25.765s 00:30:19.479 sys 5m42.681s 00:30:19.479 21:33:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.479 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 ************************************ 00:30:19.479 END TEST nvmf_rdma 00:30:19.479 ************************************ 00:30:19.479 21:33:54 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:30:19.479 21:33:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:19.479 21:33:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:19.479 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 ************************************ 00:30:19.479 START TEST spdkcli_nvmf_rdma 00:30:19.479 ************************************ 00:30:19.479 21:33:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:30:19.737 * Looking for test storage... 
00:30:19.737 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:30:19.737 21:33:54 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:30:19.737 21:33:54 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:19.737 21:33:54 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:30:19.737 21:33:54 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.737 21:33:54 -- nvmf/common.sh@7 -- # uname -s 00:30:19.737 21:33:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.737 21:33:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.737 21:33:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.737 21:33:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.737 21:33:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.737 21:33:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.737 21:33:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.737 21:33:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.737 21:33:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.737 21:33:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.737 21:33:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:19.737 21:33:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:19.737 21:33:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.737 21:33:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.737 21:33:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.737 21:33:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:19.737 21:33:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.737 21:33:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.737 21:33:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.737 21:33:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.737 21:33:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.737 21:33:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.737 21:33:54 -- paths/export.sh@5 -- # export PATH 00:30:19.737 21:33:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.737 21:33:54 -- nvmf/common.sh@46 -- # : 0 00:30:19.737 21:33:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:19.737 21:33:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:19.737 21:33:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:19.737 21:33:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.737 21:33:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.737 21:33:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:19.737 21:33:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:19.737 21:33:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:19.737 21:33:54 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:19.737 21:33:54 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:19.737 21:33:54 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:19.737 21:33:54 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:19.737 21:33:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:19.737 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:30:19.737 21:33:54 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:19.737 21:33:54 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1857682 00:30:19.737 21:33:54 -- spdkcli/common.sh@34 -- # waitforlisten 1857682 00:30:19.737 21:33:54 -- common/autotest_common.sh@819 -- # '[' -z 1857682 ']' 00:30:19.737 21:33:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.737 21:33:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:19.737 21:33:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.738 21:33:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:19.738 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:30:19.738 21:33:54 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:19.738 [2024-07-26 21:33:54.473734] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
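run_nvmf_tgt above launches the target with -m 0x3 (cores 0-1) and -p 0 (core 0 as the main core), and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers; only then does the spdkcli test proceed. A minimal standalone sketch of the same start-and-wait pattern, assuming an SPDK checkout as the working directory (the polling loop is an illustration, not the waitforlisten implementation):

    # Start the NVMe-oF target pinned to cores 0-1 and wait until its RPC socket answers.
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"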
00:30:19.738 [2024-07-26 21:33:54.473788] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857682 ] 00:30:19.738 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.738 [2024-07-26 21:33:54.555715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:19.738 [2024-07-26 21:33:54.594316] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:19.738 [2024-07-26 21:33:54.594445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.738 [2024-07-26 21:33:54.594448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.674 21:33:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:20.674 21:33:55 -- common/autotest_common.sh@852 -- # return 0 00:30:20.674 21:33:55 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:20.674 21:33:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:20.674 21:33:55 -- common/autotest_common.sh@10 -- # set +x 00:30:20.674 21:33:55 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:20.674 21:33:55 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:30:20.674 21:33:55 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:30:20.674 21:33:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:30:20.674 21:33:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.674 21:33:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:20.674 21:33:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:20.674 21:33:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:20.674 21:33:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.674 21:33:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:20.674 21:33:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.674 21:33:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:20.674 21:33:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:20.674 21:33:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:20.674 21:33:55 -- common/autotest_common.sh@10 -- # set +x 00:30:28.795 21:34:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:28.795 21:34:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:28.795 21:34:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:28.795 21:34:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:28.795 21:34:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:28.795 21:34:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:28.795 21:34:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:28.795 21:34:03 -- nvmf/common.sh@294 -- # net_devs=() 00:30:28.795 21:34:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:28.795 21:34:03 -- nvmf/common.sh@295 -- # e810=() 00:30:28.795 21:34:03 -- nvmf/common.sh@295 -- # local -ga e810 00:30:28.795 21:34:03 -- nvmf/common.sh@296 -- # x722=() 00:30:28.795 21:34:03 -- nvmf/common.sh@296 -- # local -ga x722 00:30:28.795 21:34:03 -- nvmf/common.sh@297 -- # mlx=() 00:30:28.795 21:34:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:28.795 21:34:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.795 21:34:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:28.795 21:34:03 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:30:28.795 21:34:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:30:28.795 21:34:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:30:28.795 21:34:03 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:30:28.795 21:34:03 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:30:28.795 21:34:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:28.795 21:34:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:28.795 21:34:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:28.795 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:28.795 21:34:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:30:28.795 21:34:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:30:28.796 21:34:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:28.796 21:34:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:28.796 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:28.796 21:34:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:30:28.796 21:34:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:28.796 21:34:03 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:28.796 21:34:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.796 21:34:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:28.796 21:34:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.796 21:34:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:28.796 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:28.796 21:34:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.796 21:34:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:28.796 21:34:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.796 21:34:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:28.796 21:34:03 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.796 21:34:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:28.796 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:28.796 21:34:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.796 21:34:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:28.796 21:34:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:28.796 21:34:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:30:28.796 21:34:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:30:28.796 21:34:03 -- nvmf/common.sh@57 -- # uname 00:30:28.796 21:34:03 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:30:28.796 21:34:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:30:28.796 21:34:03 -- nvmf/common.sh@62 -- # modprobe ib_core 00:30:28.796 21:34:03 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:30:28.796 21:34:03 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:30:28.796 21:34:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:30:28.796 21:34:03 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:30:28.796 21:34:03 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:30:28.796 21:34:03 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:30:28.796 21:34:03 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:28.796 21:34:03 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:30:28.796 21:34:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:28.796 21:34:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:30:28.796 21:34:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:30:28.796 21:34:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:28.796 21:34:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:30:28.796 21:34:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:30:28.796 21:34:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:28.796 21:34:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:30:28.796 21:34:03 -- nvmf/common.sh@104 -- # continue 2 00:30:28.796 21:34:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:30:28.796 21:34:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:28.796 21:34:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:28.796 21:34:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:30:28.796 21:34:03 -- nvmf/common.sh@104 -- # continue 2 00:30:28.796 21:34:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:30:28.796 21:34:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:30:28.796 21:34:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:30:28.796 21:34:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:30:28.796 21:34:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:30:28.796 21:34:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:30:28.796 21:34:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:30:28.796 21:34:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:30:28.796 21:34:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:30:29.055 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:30:29.055 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:29.055 altname enp217s0f0np0 00:30:29.055 altname ens818f0np0 00:30:29.055 inet 192.168.100.8/24 scope global mlx_0_0 00:30:29.055 valid_lft forever preferred_lft forever 00:30:29.055 21:34:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:30:29.055 21:34:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:30:29.055 21:34:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:30:29.055 21:34:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:30:29.055 21:34:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:30:29.055 21:34:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:30:29.055 21:34:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:30:29.055 21:34:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:30:29.055 21:34:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:30:29.055 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:29.055 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:29.055 altname enp217s0f1np1 00:30:29.055 altname ens818f1np1 00:30:29.055 inet 192.168.100.9/24 scope global mlx_0_1 00:30:29.055 valid_lft forever preferred_lft forever 00:30:29.055 21:34:03 -- nvmf/common.sh@410 -- # return 0 00:30:29.055 21:34:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:29.055 21:34:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:29.055 21:34:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:30:29.055 21:34:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:30:29.055 21:34:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:30:29.055 21:34:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:29.055 21:34:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:30:29.055 21:34:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:30:29.055 21:34:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:29.055 21:34:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:30:29.055 21:34:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:30:29.055 21:34:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:29.055 21:34:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:29.055 21:34:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:30:29.055 21:34:03 -- nvmf/common.sh@104 -- # continue 2 00:30:29.055 21:34:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:30:29.055 21:34:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:29.055 21:34:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:29.055 21:34:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:29.055 21:34:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:29.055 21:34:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:30:29.055 21:34:03 -- nvmf/common.sh@104 -- # continue 2 00:30:29.055 21:34:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:30:29.055 21:34:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:30:29.055 21:34:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:30:29.055 21:34:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:30:29.055 21:34:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:30:29.055 21:34:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:30:29.055 21:34:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:30:29.055 21:34:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:30:29.055 21:34:03 -- 
nvmf/common.sh@111 -- # interface=mlx_0_1 00:30:29.055 21:34:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:30:29.055 21:34:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:30:29.055 21:34:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:30:29.055 21:34:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:30:29.055 192.168.100.9' 00:30:29.055 21:34:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:30:29.055 192.168.100.9' 00:30:29.055 21:34:03 -- nvmf/common.sh@445 -- # head -n 1 00:30:29.055 21:34:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:29.055 21:34:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:30:29.055 192.168.100.9' 00:30:29.055 21:34:03 -- nvmf/common.sh@446 -- # tail -n +2 00:30:29.055 21:34:03 -- nvmf/common.sh@446 -- # head -n 1 00:30:29.055 21:34:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:29.055 21:34:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:30:29.055 21:34:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:29.055 21:34:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:30:29.055 21:34:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:30:29.055 21:34:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:30:29.055 21:34:03 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:30:29.055 21:34:03 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:29.055 21:34:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:29.055 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:30:29.055 21:34:03 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:29.055 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:29.055 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:29.055 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:29.055 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:29.055 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:29.055 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:29.055 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:30:29.055 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:30:29.055 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:29.055 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:29.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:29.055 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:29.055 ' 00:30:29.314 [2024-07-26 21:34:04.144364] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:31.847 [2024-07-26 21:34:06.207285] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1185d70/0x11092c0) succeed. 00:30:31.847 [2024-07-26 21:34:06.217336] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1187450/0x1189300) succeed. 
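With both mlx5 ports registered as IB devices, the target starts listening on 192.168.100.8 ports 4260/4261/4262 and spdkcli_job.py replays the quoted command list above, echoing each line as "Executing command" below. The same configuration can also be applied one command at a time with scripts/spdkcli.py; the batch job script is what this test actually runs, so the one-off calls below are only an illustration of the command syntax:

    spdkcli=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py
    $spdkcli '/bdevs/malloc create 32 512 Malloc1'
    $spdkcli 'nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'
    $spdkcli '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
    $spdkcli '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'
    $spdkcli ll /nvmf    # same tree listing the match step runs later in this log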
00:30:32.784 [2024-07-26 21:34:07.459536] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:30:35.321 [2024-07-26 21:34:09.642516] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:30:36.699 [2024-07-26 21:34:11.520766] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:30:38.118 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:38.118 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:38.118 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:38.118 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:38.118 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:38.118 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:38.118 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:38.118 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:38.118 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:38.118 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:38.118 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:30:38.118 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:38.118 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:38.118 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:30:38.118 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:30:38.119 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:38.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:38.119 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:38.378 21:34:13 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:38.378 21:34:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:38.378 21:34:13 -- common/autotest_common.sh@10 -- # set +x 00:30:38.378 21:34:13 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:38.378 21:34:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:38.378 21:34:13 -- common/autotest_common.sh@10 -- # set +x 00:30:38.378 21:34:13 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:38.378 21:34:13 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:38.636 21:34:13 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:38.895 21:34:13 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:38.895 21:34:13 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:38.895 21:34:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:38.895 21:34:13 -- common/autotest_common.sh@10 -- # set +x 00:30:38.895 21:34:13 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:38.895 21:34:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:38.895 21:34:13 -- common/autotest_common.sh@10 -- # set +x 00:30:38.895 21:34:13 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:38.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:38.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:38.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:38.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:30:38.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:30:38.895 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:38.895 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:38.895 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:38.895 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:38.895 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:38.895 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:38.895 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:38.895 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:38.895 ' 00:30:44.169 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:44.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:44.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:44.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:44.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:30:44.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:30:44.169 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:44.169 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:44.169 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:44.169 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:44.169 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:44.169 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:44.169 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:44.169 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:44.169 21:34:18 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:44.169 21:34:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:44.169 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:30:44.169 21:34:18 -- spdkcli/nvmf.sh@90 -- # killprocess 1857682 00:30:44.169 21:34:18 -- common/autotest_common.sh@926 -- # '[' -z 1857682 ']' 00:30:44.169 21:34:18 -- common/autotest_common.sh@930 -- # kill -0 1857682 00:30:44.169 21:34:18 -- common/autotest_common.sh@931 -- # uname 00:30:44.169 21:34:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:44.169 21:34:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1857682 00:30:44.169 21:34:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:44.169 21:34:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:44.169 21:34:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1857682' 00:30:44.169 killing process with pid 1857682 00:30:44.169 21:34:18 -- common/autotest_common.sh@945 -- # kill 1857682 00:30:44.169 [2024-07-26 21:34:18.644753] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:44.169 21:34:18 -- common/autotest_common.sh@950 -- # wait 1857682 00:30:44.169 21:34:18 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:30:44.169 21:34:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:44.169 21:34:18 -- nvmf/common.sh@116 -- # sync 00:30:44.169 21:34:18 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:30:44.169 21:34:18 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:30:44.169 21:34:18 -- nvmf/common.sh@119 -- # set +e 00:30:44.169 21:34:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:44.169 21:34:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:30:44.169 rmmod nvme_rdma 00:30:44.169 rmmod nvme_fabrics 00:30:44.169 21:34:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
00:30:44.169 21:34:18 -- nvmf/common.sh@123 -- # set -e 00:30:44.169 21:34:18 -- nvmf/common.sh@124 -- # return 0 00:30:44.169 21:34:18 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:30:44.169 21:34:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:44.169 21:34:18 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:30:44.169 00:30:44.169 real 0m24.587s 00:30:44.169 user 0m52.398s 00:30:44.169 sys 0m7.342s 00:30:44.169 21:34:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.169 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:30:44.169 ************************************ 00:30:44.169 END TEST spdkcli_nvmf_rdma 00:30:44.169 ************************************ 00:30:44.169 21:34:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:30:44.169 21:34:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:44.169 21:34:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:44.169 21:34:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:30:44.169 21:34:18 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:30:44.169 21:34:18 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:30:44.169 21:34:18 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:30:44.169 21:34:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:44.169 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:30:44.169 21:34:18 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:30:44.169 21:34:18 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:30:44.169 21:34:18 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:30:44.169 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:30:50.740 INFO: APP EXITING 00:30:50.740 INFO: killing all VMs 00:30:50.740 INFO: killing vhost app 00:30:50.740 WARN: no vhost pid file found 00:30:50.740 INFO: EXIT DONE 00:30:54.929 Waiting for block devices as requested 00:30:54.929 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:54.929 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:54.929 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:54.929 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:54.929 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:54.929 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:54.929 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:54.929 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:54.929 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:55.187 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:55.187 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:55.187 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:55.187 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:55.446 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:55.446 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:55.446 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:55.705 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:59.899 Cleaning 00:30:59.899 Removing: 
/var/run/dpdk/spdk0/config 00:30:59.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:59.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:59.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:59.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:59.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:59.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:59.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:59.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:59.899 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:59.899 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:59.899 Removing: /var/run/dpdk/spdk1/config 00:30:59.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:59.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:59.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:59.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:59.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:59.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:59.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:59.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:59.899 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:59.899 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:59.899 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:59.899 Removing: /var/run/dpdk/spdk2/config 00:30:59.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:59.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:59.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:59.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:59.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:59.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:59.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:59.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:59.899 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:59.899 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:59.899 Removing: /var/run/dpdk/spdk3/config 00:30:59.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:59.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:59.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:59.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:59.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:59.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:59.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:59.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:59.899 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:59.899 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:59.899 Removing: /var/run/dpdk/spdk4/config 00:30:59.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:59.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:59.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:59.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:59.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:59.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:59.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:59.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:59.899 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:59.899 Removing: 
/var/run/dpdk/spdk4/hugepage_info 00:30:59.899 Removing: /dev/shm/bdevperf_trace.pid1672958 00:30:59.899 Removing: /dev/shm/bdevperf_trace.pid1775820 00:30:59.899 Removing: /dev/shm/bdev_svc_trace.1 00:30:59.899 Removing: /dev/shm/nvmf_trace.0 00:30:59.899 Removing: /dev/shm/spdk_tgt_trace.pid1498970 00:30:59.899 Removing: /var/run/dpdk/spdk0 00:30:59.899 Removing: /var/run/dpdk/spdk1 00:30:59.899 Removing: /var/run/dpdk/spdk2 00:30:59.899 Removing: /var/run/dpdk/spdk3 00:30:59.899 Removing: /var/run/dpdk/spdk4 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1496440 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1497710 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1498970 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1499671 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1505555 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1507028 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1507325 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1507638 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1507984 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1508232 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1508380 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1508658 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1508966 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1509833 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1513024 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1513320 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1513627 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1513891 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1514220 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1514477 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1515060 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1515080 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1515412 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1515644 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1515911 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1515955 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1516526 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1516703 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1516951 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1517241 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1517316 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1517573 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1517762 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1517965 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1518147 00:30:59.899 Removing: /var/run/dpdk/spdk_pid1518431 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1518711 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1518993 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1519261 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1519527 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1519698 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1519873 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1520128 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1520410 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1520678 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1520961 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1521233 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1521460 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1521607 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1521829 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1522095 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1522379 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1522651 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1522934 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1523156 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1523355 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1523519 00:31:00.158 
Removing: /var/run/dpdk/spdk_pid1523794 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1524062 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1524349 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1524620 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1524905 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1525118 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1525317 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1525488 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1525775 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1526046 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1526335 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1526607 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1526890 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1527089 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1527303 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1527509 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1527852 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1533007 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1634668 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1639677 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1650609 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1656550 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1661495 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1662312 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1672958 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1673244 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1678032 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1684692 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1687446 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1698931 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1727717 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1731967 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1737332 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1773787 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1774821 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1775820 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1780768 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1789368 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1790308 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1791260 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1792097 00:31:00.158 Removing: /var/run/dpdk/spdk_pid1792607 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1797700 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1797771 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1802974 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1803515 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1804139 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1804865 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1804889 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1807858 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1809741 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1811623 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1813508 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1815403 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1817292 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1824216 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1824876 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1827191 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1828217 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1835984 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1838857 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1845063 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1845340 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1852609 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1852935 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1854864 00:31:00.417 Removing: /var/run/dpdk/spdk_pid1857682 00:31:00.417 
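The Cleaning step above removes the per-application DPDK runtime directories (/var/run/dpdk/spdk0 through spdk4 plus the spdk_pid* lock files) and the trace shared-memory files the various apps left in /dev/shm. A rough manual equivalent, for illustration only; the run itself does this through autotest_cleanup:

    # Approximate manual equivalent of the cleanup listed above.
    sudo rm -rf /var/run/dpdk/spdk*                      # per-app DPDK config/memseg state
    sudo rm -f  /dev/shm/*_trace* /dev/shm/nvmf_trace.0  # SPDK trace shm files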
Clean 00:31:00.417 killing process with pid 1439974 00:31:18.589 killing process with pid 1439970 00:31:18.589 killing process with pid 1439972 00:31:18.589 killing process with pid 1439971 00:31:18.589 21:34:51 -- common/autotest_common.sh@1436 -- # return 0 00:31:18.589 21:34:51 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:31:18.589 21:34:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:18.589 21:34:51 -- common/autotest_common.sh@10 -- # set +x 00:31:18.589 21:34:51 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:31:18.589 21:34:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:18.589 21:34:51 -- common/autotest_common.sh@10 -- # set +x 00:31:18.589 21:34:51 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:31:18.589 21:34:51 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:31:18.589 21:34:51 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:31:18.589 21:34:51 -- spdk/autotest.sh@394 -- # hash lcov 00:31:18.589 21:34:51 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:18.589 21:34:51 -- spdk/autotest.sh@396 -- # hostname 00:31:18.589 21:34:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:31:18.589 geninfo: WARNING: invalid characters removed from testname! 00:31:36.681 21:35:09 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:36.939 21:35:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:38.841 21:35:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:40.218 21:35:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:41.596 21:35:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:43.502 21:35:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:44.884 21:35:19 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:44.884 21:35:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:44.884 21:35:19 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:44.884 21:35:19 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:44.884 21:35:19 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:44.884 21:35:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.884 21:35:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.884 21:35:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.884 21:35:19 -- paths/export.sh@5 -- $ export PATH 00:31:44.884 21:35:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.884 21:35:19 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:31:44.884 21:35:19 -- common/autobuild_common.sh@438 -- $ date +%s 00:31:44.884 21:35:19 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1722022519.XXXXXX 00:31:44.884 21:35:19 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1722022519.O6MVru 00:31:44.884 21:35:19 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:31:44.884 21:35:19 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:31:44.884 21:35:19 -- common/autobuild_common.sh@445 -- $ dirname 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:31:44.884 21:35:19 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:31:44.884 21:35:19 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:44.884 21:35:19 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:31:44.884 21:35:19 -- common/autobuild_common.sh@454 -- $ get_config_params 00:31:44.884 21:35:19 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:31:44.884 21:35:19 -- common/autotest_common.sh@10 -- $ set +x 00:31:44.884 21:35:19 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:31:44.884 21:35:19 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:31:44.884 21:35:19 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:44.884 21:35:19 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:44.884 21:35:19 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:44.884 21:35:19 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:44.884 21:35:19 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:44.884 21:35:19 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:44.884 21:35:19 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:44.884 21:35:19 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:31:44.884 21:35:19 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:44.884 + [[ -n 1385708 ]] 00:31:44.884 + sudo kill 1385708 00:31:44.895 [Pipeline] } 00:31:44.915 [Pipeline] // stage 00:31:44.921 [Pipeline] } 00:31:44.938 [Pipeline] // timeout 00:31:44.943 [Pipeline] } 00:31:44.960 [Pipeline] // catchError 00:31:44.966 [Pipeline] } 00:31:44.984 [Pipeline] // wrap 00:31:44.991 [Pipeline] } 00:31:45.007 [Pipeline] // catchError 00:31:45.018 [Pipeline] stage 00:31:45.020 [Pipeline] { (Epilogue) 00:31:45.035 [Pipeline] catchError 00:31:45.037 [Pipeline] { 00:31:45.051 [Pipeline] echo 00:31:45.053 Cleanup processes 00:31:45.059 [Pipeline] sh 00:31:45.344 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:45.344 1880328 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:45.357 [Pipeline] sh 00:31:45.702 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:45.702 ++ grep -v 'sudo pgrep' 00:31:45.702 ++ awk '{print $1}' 00:31:45.702 + sudo kill -9 00:31:45.702 + true 00:31:45.713 [Pipeline] sh 00:31:45.996 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:45.996 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:31:51.267 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:31:54.566 [Pipeline] sh 00:31:54.852 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:54.852 Artifacts sizes are good 
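The lcov invocations earlier in this epilogue follow a common capture/merge/filter pattern: capture the counters gathered while the tests ran, append them to the pre-test baseline, then strip DPDK, system headers, and SPDK tool sources out of the combined tracefile. The sequence below is a condensed restatement of those steps, with the workspace paths shortened and most of the --rc options omitted; it is a sketch, not the verbatim autotest commands.

# Condensed sketch of the coverage post-processing seen in the log above.
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output

# 1. Capture counters produced during the test run.
$LCOV -c -d ./spdk -t spdk-wfp-21 -o "$out/cov_test.info"
# 2. Merge the test counters with the baseline taken before the tests.
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# 3. Remove sources that should not count toward SPDK coverage.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    $LCOV -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done
# 4. Drop the intermediate tracefiles.
rm -f "$out/cov_base.info" "$out/cov_test.info"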
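compress_artifacts.sh and check_artifacts_size.sh themselves are not shown in this log, so the following is only a hedged approximation of what the xz messages above imply: artifacts are compressed with a high thread count that xz scales down to respect its memory limit, and the resulting output size is then checked. The artifact directory path and the size threshold are assumptions for illustration.

# Hedged approximation only -- not the real compress_artifacts.sh / check_artifacts_size.sh.
out=/var/jenkins/workspace/nvmf-phy-autotest/output   # assumed artifact directory
limit_mb=2048                                         # assumed size threshold

# Compress text artifacts; xz may lower the thread count to fit its memory limit,
# which is what the "Reduced the number of threads" messages above report.
find "$out" -type f -name '*.log' -print0 | xargs -0 -r xz -T112 -9

# Fail the step if the compressed artifacts are still too large.
size_mb=$(du -sm "$out" | awk '{print $1}')
if [ "$size_mb" -le "$limit_mb" ]; then
    echo "Artifacts sizes are good"
else
    echo "Artifacts are too large: ${size_mb} MiB" >&2
    exit 1
fi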
00:31:54.867 [Pipeline] archiveArtifacts 00:31:54.874 Archiving artifacts 00:31:55.078 [Pipeline] sh 00:31:55.363 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:31:55.378 [Pipeline] cleanWs 00:31:55.388 [WS-CLEANUP] Deleting project workspace... 00:31:55.388 [WS-CLEANUP] Deferred wipeout is used... 00:31:55.395 [WS-CLEANUP] done 00:31:55.396 [Pipeline] } 00:31:55.410 [Pipeline] // catchError 00:31:55.421 [Pipeline] sh 00:31:55.703 + logger -p user.info -t JENKINS-CI 00:31:55.713 [Pipeline] } 00:31:55.728 [Pipeline] // stage 00:31:55.734 [Pipeline] } 00:31:55.751 [Pipeline] // node 00:31:55.757 [Pipeline] End of Pipeline 00:31:55.808 Finished: SUCCESS